CN111539311B - Living body judging method, device and system based on IR and RGB double shooting - Google Patents

Living body judging method, device and system based on IR and RGB double shooting

Info

Publication number
CN111539311B
Authority
CN
China
Prior art keywords
living body
face image
rgb
camera
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010316897.4A
Other languages
Chinese (zh)
Other versions
CN111539311A (en)
Inventor
陈辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kaike Intelligent Technology Co ltd
Original Assignee
Shanghai Kaike Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Kaike Intelligent Technology Co ltd filed Critical Shanghai Kaike Intelligent Technology Co ltd
Priority to CN202010316897.4A priority Critical patent/CN111539311B/en
Publication of CN111539311A publication Critical patent/CN111539311A/en
Application granted granted Critical
Publication of CN111539311B publication Critical patent/CN111539311B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the invention discloses a living body judging method, device and system based on IR and RGB double shooting, wherein the method comprises the following steps: obtaining a video stream to be detected through a binocular module, and carrying out face detection on the video stream to be detected to obtain an RGB face image and a first IR face image; inputting the first IR face image into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image; and inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result. The embodiment of the invention makes full use of the different IR imaging of different materials and of the binocular parallax information, and carries out face living body judgment in a progressive mode, thereby improving the accuracy of living body judgment.

Description

Living body judging method, device and system based on IR and RGB double shooting
Technical Field
The invention relates to the technical field of computer vision, in particular to a living body judging method, device and system based on IR and RGB double shooting.
Background
Existing living body discrimination methods mainly fall into the following three types:
(1) Living body judgment based only on the different infrared reflection of materials such as screen replay, photographic paper and color high-definition printing paper; this approach is simple, but prone to false detection and missed detection;
(2) Training a living body discrimination network on the eye region map, using the imaging difference of the face and eye regions in the infrared image. This method places higher requirements on infrared imaging quality, which is unfavorable for low-cost use.
(3) Judging whether the intensity of infrared light reflected from facial feature points falls within a range; this method depends heavily on the accuracy of the light source brightness, the distance and the like, and is easily affected, leading to unstable results.
Disclosure of Invention
Aiming at the technical defects, the embodiment of the invention provides a living body judging method, device and system based on IR and RGB double shooting.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a living body discriminating method based on IR and RGB double photographing, including:
obtaining a video stream to be detected through a binocular module, wherein the binocular module comprises an RGB camera and an IR camera;
performing face detection on the video stream to be detected to obtain an RGB face image and a first IR face image;
inputting the first IR face image into a convolutional neural network to perform living body judgment so as to obtain a first living body judgment result and a second IR face image;
and inputting the RGB face image and the second IR face image into a twin network to perform living body judgment so as to obtain a second living body judgment result.
As a specific embodiment of the present application, obtaining the first living body determination result and the second IR face image specifically includes:
inputting the first IR face image into the convolutional neural network to perform living body judgment so as to obtain living body probability;
if the living body probability is larger than a preset value, determining the first IR face image corresponding to the living body probability as a first living body judging result;
and if the living body probability is smaller than a preset value, determining the first IR face image corresponding to the living body probability as the second IR face image.
As a specific embodiment of the present application, obtaining the second living body determination result specifically includes:
inputting the RGB face image and the second IR face image into the twin network, extracting features of the RGB face image and the second IR face image by the twin network, performing full-connection classification on the extracted features after performing difference processing to obtain a second living body judging result, and performing spoof alarming on a non-living body obtained after the full-connection classification.
As a preferred embodiment of the present application, after obtaining the RGB face image and the first IR face image, the method further includes:
and calculating the face size through a binocular epipolar relation formed by the RGB camera and the IR camera, comparing the face size with a preset reasonable face size range, discarding the RGB face image and the first IR face image if the face size does not fall into the reasonable face size range, and carrying out spoofe warning.
As a preferred embodiment of the present application, before the video stream to be detected is acquired by the binocular module, the method further includes:
collecting a plurality of live pictures of a real person and print attack and mask attack pictures through the IR camera to serve as first training samples, and obtaining the convolutional neural network based on the first training samples;
and acquiring a plurality of live human pictures and print attack and mask attack pictures through the RGB camera and the IR camera to serve as second training samples, and obtaining the twin network based on the second training samples.
As a preferred embodiment of the present application, after obtaining the second living body determination result, the method further includes:
and sending the first living body judging result and the second living body judging result to external equipment for display.
In a second aspect, an embodiment of the present invention provides a living body discriminating apparatus based on IR and RGB dual photographing, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method of the first aspect.
In a third aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect described above.
In a fourth aspect, the embodiment of the invention further provides a living body distinguishing system based on the IR and RGB double-shot, which comprises a binocular module, an infrared light supplementing lamp, a living body distinguishing device and external equipment, wherein the binocular module comprises an RGB camera and an IR camera, and the infrared light supplementing lamp is used for supplementing light to the IR camera. The binocular module is used for collecting video streams to be detected, the living body distinguishing device is respectively communicated with the binocular module and external equipment, and the living body distinguishing device is as described in the second aspect.
By implementing the embodiment of the invention, face detection is performed on the video stream to be detected to obtain an RGB face image and a first IR face image; the first IR face image is input into a convolutional neural network for living body judgment to obtain a first living body judgment result and a second IR face image (namely the faces that the convolutional neural network does not judge to be living bodies); finally, the RGB face image and the second IR face image are input into a twin network for living body discrimination to obtain a second living body discrimination result. The embodiment of the invention makes full use of the different IR imaging of different materials and of the binocular parallax information, and carries out face living body judgment in a progressive mode, thereby improving the accuracy of living body judgment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a living body discriminating method based on IR and RGB double photographing provided in the first embodiment of the present invention;
FIG. 2 is a schematic diagram of a binocular module used in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first stage of binocular living body discrimination;
FIG. 4 is a schematic diagram of a second stage of binocular living body discrimination;
FIG. 5 is a schematic diagram of a third stage of binocular living body discrimination;
fig. 6 is a schematic structural diagram of a living body discriminating system based on IR and RGB double photographing according to the embodiment of the present invention;
fig. 7 is a schematic structural view of the living body discriminating apparatus shown in fig. 6.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is characterized in that: IR and RGB double-shot images are used in combination and, proceeding from shallow to deep, screen and photographic-paper attacks are excluded by the markedly different IR imaging of these materials, face attacks of unreasonable size are excluded by binocular ranging, the imaging differences of IR on living bodies, on faces of other materials and on the regions around them are classified by a convolutional neural network, and the feature differences of the binocular-matched IR and RGB faces are extracted by a twin convolutional neural network to perform living body judgment.
Based on the above inventive concept, please refer to fig. 1, a first embodiment of the present invention provides a living body discriminating method based on IR and RGB double shooting, comprising:
s101, acquiring a sample image, and training a convolutional neural network and a twin network based on the sample image.
Before performing the living body discrimination, the convolutional neural network and the twin network need to be trained. When collecting sample images, this embodiment uses the binocular module shown in fig. 2. As shown in fig. 2, the binocular module used in the embodiment of the present invention includes an RGB camera, an IR camera, an infrared light supplementing lamp, and a display or functional area. The RGB camera and the IR camera are used for collecting video streams or sample images, the infrared light supplementing lamp is used for supplementing light for the RGB camera and the IR camera, and the display or functional area is used for displaying the recognized living body result.
Specifically, when training the convolutional neural network, a plurality of real-person live pictures, for example more than 5000 pictures, are collected through the IR camera alone as training samples, together with 5000 pictures of face attacks such as high-definition color prints and black-and-white prints. In this embodiment, the convolutional neural network obtained by training is a MobileNetV2 network with 224×224 input.
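For reference, a minimal sketch of such an IR living body classifier is given below, assuming a PyTorch/torchvision implementation; the class name, the two-class head and the replication of the single-channel IR image to three channels are illustrative assumptions rather than details specified in the patent.

import torch
import torch.nn as nn
from torchvision import models

class IRLivenessNet(nn.Module):
    """224x224-input MobileNetV2 with a two-class (live / non-live) head (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.backbone = models.mobilenet_v2()
        # Swap the 1000-class ImageNet classifier for a live / non-live head.
        self.backbone.classifier[1] = nn.Linear(self.backbone.last_channel, 2)

    def forward(self, x):  # x: (N, 3, 224, 224); a 1-channel IR face may be repeated to 3 channels
        return self.backbone(x)

model = IRLivenessNet()
ir_face = torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1)  # placeholder for an aligned IR face
live_probability = torch.softmax(model(ir_face), dim=1)[0, 1].item()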
Specifically, when training the twin network, a plurality of real-person live pictures are collected simultaneously by the RGB camera and the IR camera as training samples. In this embodiment, the backbone portion of the twin network obtained by training is a MobileNetV2 network.
S102, obtaining a video stream to be detected through a binocular module.
S103, carrying out face detection on the video stream to be detected to obtain an RGB face image and a first IR face image.
S104, inputting the first IR face image into a convolutional neural network to perform living body judgment so as to obtain a first living body judgment result and a second IR face image.
Specifically, step S104 includes:
(1) Inputting the first IR face image into a convolutional neural network to perform living body judgment so as to obtain living body probability;
(2) If the living body probability is larger than a preset value, determining a first IR face image corresponding to the living body probability as a first living body judging result;
(3) If the living body probability is smaller than the preset value, the first IR face image corresponding to the living body probability is determined to be the second IR face image.
It should be noted that, in step S104, the IR faces meeting the size judgment requirement are sent to the convolutional neural network for living body judgment: a face whose living body probability is greater than the preset value is judged to be a living body, a face whose living body probability is smaller than the preset value triggers a spoof alarm, and the faces that cannot be determined are taken as the second IR face image, so as to perform the next stage of living body judgment.
S105, inputting the RGB face image and the second IR face image into a twin network for living body judgment to obtain a second living body judgment result.
It should be noted that, the IR face which cannot be determined in step S104 and the RGB face obtained by the foregoing detection are input into a twin network, the twin network performs feature extraction on the RGB face image and the second IR face image, performs difference processing on the extracted features, and then performs full connection classification, so as to obtain a second living body determination result, and performs a spoof alarm on the non-living body obtained after full connection classification.
S106, the first living body judging result and the second living body judging result are sent to an external device to be displayed.
Specifically, the living body determination results obtained in steps S104 and S105 are displayed in the display or functional area shown in fig. 2.
In addition to being sent to steps S104 and S105 for living body discrimination, the RGB face and the IR face obtained in step S103 are also subjected to a size judgment. The method comprises the following steps: the face size is calculated through the binocular epipolar relation formed by the RGB camera and the IR camera and compared with a preset reasonable face size range; if the face size does not fall into the reasonable face size range, the RGB face image and the first IR face image are discarded and a spoof warning is raised, as shown in figure 3.
The cameras are calibrated using the Zhang Zhengyou checkerboard method, and the face size is calculated as follows:
Let the parallax be d, the focal length be f, and the baseline length be b; then the depth is
z = f*b/d
The measurement precision delta_z is
delta_z = f*b/d0 - f*b/d1
For one camera, the value of d is determined by the pixel disparity and the pixel size:
d = d_pixel*pixel_size
After z is calculated, the face size can be calculated from the number of face pixels by similar triangles.
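The calculation above can be written as a short numerical sketch; the calibration values (focal length, baseline, pixel size) and the reasonable face-size range used below are purely illustrative, and only the order of operations follows the formulas in the text.

def depth_from_disparity(d_pixel, pixel_size, f, b):
    """z = f*b/d, where the disparity d = d_pixel * pixel_size is converted to metric units."""
    d = d_pixel * pixel_size
    return f * b / d

def depth_precision(f, b, d0, d1):
    """delta_z = f*b/d0 - f*b/d1, the depth difference between two disparity values d0 and d1."""
    return f * b / d0 - f * b / d1

def face_size_from_depth(z, face_extent_on_sensor, f):
    """Similar triangles: real size / z = extent on the sensor / f."""
    return face_extent_on_sensor * z / f

# Example with hypothetical calibration values (f = 4 mm, baseline = 3 cm, 3 um pixels).
z = depth_from_disparity(d_pixel=40, pixel_size=3e-6, f=0.004, b=0.03)           # about 1.0 m
face_width = face_size_from_depth(z, face_extent_on_sensor=200 * 3e-6, f=0.004)  # about 0.15 m
is_reasonable = 0.10 < face_width < 0.30   # preset reasonable face-size range (illustrative)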
Further, referring to fig. 3 to 5, the living body discriminating section in the embodiment of the present invention is performed in a progressive manner, and mainly includes three stages:
the first stage: as shown in fig. 3, after correcting the RGB camera and the IR, face detection is started, the RGB camera performs operations such as face detection and scoring to obtain an RGB face, and the IR camera performs operations such as face detection and brightness contrast detection to obtain an IR face; if the RGB camera detects the face and the IR camera does not detect the face, the spoif alarm is carried out; for detected RGB faces and IR faces, matching operation is carried out in FIG. 3, if the detected RGB faces and IR faces are not matched, a spof alarm is carried out; in addition, the face size is calculated through the binocular epipolar relation formed by the RGB camera and the IR camera, compared with the range of the preset reasonable face size, the range is not discarded, and the spoof alarm is carried out. .
It should be noted that the first stage mainly uses the characteristic that the screens of mobile phones, pads, computers and the like only display content under visible light to exclude screen attacks from electronic devices, and uses the characteristic that glossy photographic paper cannot show content under near infrared light to exclude face attacks printed on glossy photographic paper. For face images printed on paper with good diffuse reflection, which do image under near infrared light, the parallax is calculated in combination with the face image in RGB to obtain distance information and face size information, excluding attacks with an unreasonable face size.
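The first-stage exclusion logic can be summarised by the following sketch; it assumes that face detection, binocular matching and binocular ranging have already been performed, and the function name, return strings and size range are illustrative rather than part of the patent.

def stage_one_check(rgb_face, ir_face, matched, face_size_m, size_range=(0.10, 0.30)):
    """Progressive first-stage exclusion: returns 'pass' or the reason for a spoof alarm."""
    if rgb_face is not None and ir_face is None:
        return "spoof: face seen by the RGB camera but not by the IR camera"
    if not matched:
        return "spoof: RGB face and IR face do not match"
    if not (size_range[0] <= face_size_m <= size_range[1]):
        return "spoof: face size outside the reasonable range"
    return "pass"  # the face pair is forwarded to the second stage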
The second stage: as shown in fig. 4, the IR faces not excluded in fig. 3 are sent to a CNN network to decide whether they are living bodies.
The IR living body discriminating network uses a MobileNetV2 network with 224×224 input. The IR face is aligned according to the landmarks and sent to the IR living body discriminating network; if the living body probability is greater than thd0, the face is judged to be a living body, and if the living body probability is less than thd1, a spoof alarm is raised.
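In code form, the routing implied by the two thresholds might look as follows; the concrete values of thd0 and thd1 are illustrative, since the text does not specify them.

def route_ir_decision(live_prob, thd0=0.9, thd1=0.1):
    """Second-stage routing of an IR face based on the CNN living body probability."""
    if live_prob > thd0:
        return "live"             # judged to be a living body
    if live_prob < thd1:
        return "spoof_alarm"      # raise a spoof alarm
    return "second_ir_face"       # undecided: kept as the second IR face image for the twin network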
The third stage: as shown in fig. 5, for the faces that still cannot be judged, the RGB face is aligned according to the landmarks and sent to CNN twin network_a, and the IR face is aligned according to the landmarks and sent to CNN twin network_b; the extracted features are then differenced and the living body probability is obtained through fully connected classification. If the living body probability is greater than thd0, the face is judged to be a living body, and if it is less than thd1, a spoof alarm is raised.
The RGB/IR face living body discrimination twin network uses a MobileNetV2 network as the backbone with shared weights; the features extracted by the twin network are differenced and then classified into living bodies and non-living bodies through a fully connected layer.
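A minimal sketch of such a twin network is given below, again assuming PyTorch/torchvision; the weight-shared MobileNetV2 backbone embeds the landmark-aligned RGB and IR faces, the two embeddings are differenced, and a fully connected layer classifies live versus non-live. Class and variable names are illustrative.

import torch
import torch.nn as nn
from torchvision import models

class TwinLivenessNet(nn.Module):
    """Twin (Siamese) RGB/IR liveness network with a weight-shared MobileNetV2 backbone."""
    def __init__(self):
        super().__init__()
        mobilenet = models.mobilenet_v2()
        self.features = mobilenet.features          # shared by both branches (weight sharing)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(mobilenet.last_channel, 2)   # live / non-live

    def embed(self, x):
        return self.pool(self.features(x)).flatten(1)

    def forward(self, rgb_face, ir_face):            # both (N, 3, 224, 224), landmark-aligned
        diff = self.embed(rgb_face) - self.embed(ir_face)        # feature difference
        return self.classifier(diff)

net = TwinLivenessNet()
logits = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
live_probability = torch.softmax(logits, dim=1)[0, 1].item()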
Compared with the prior art, the embodiment of the invention has the following advantages:
(1) The different IR imaging of different materials and the binocular parallax information are fully used, and face living body judgment is carried out in a progressive mode, so that the accuracy of living body judgment is improved.
(2) The invention has low requirement on infrared imaging quality, has no special requirement on the infrared light supplementing lamp, and is convenient for mass production and low-cost deployment.
(3) The difference in the reflection intensity of the IR fill light at different distances is creatively utilized, so that the network can learn the difference between the IR reflection of a face printed on paper and that of a living face, as well as the difference between the depth gap of a living face relative to its background and that of a printed face relative to its background, which improves the living body discrimination rate.
(4) The creatively constructed twin network performs end-to-end training of feature extraction on the IR and RGB images, which is beneficial to learning the different parallax information of living faces and planar faces, and the different IR and RGB reflection information of other materials near the face.
Furthermore, the embodiment of the invention also solves the problem that living body judgment based on a single IR image is not robust enough, solves the problem in the prior art that high requirements on the IR light source or IR imaging quality make low-cost deployment difficult, and solves the problem in the prior art that the 3D imaging difference and the material reflection difference between RGB and IR imaging of the same person are not fully utilized, thereby improving the robustness of the living body algorithm.
Based on the same inventive concept, the embodiment of the invention provides a living body discriminating system based on IR and RGB double photographing. As shown in fig. 6, the system includes a living body discriminating apparatus 100, a binocular module 200, an infrared light compensating lamp 300 and an external device 400, wherein the binocular module 200 includes an RGB camera and an IR camera, the infrared light compensating lamp 300 is used for compensating light for the IR camera, the binocular module 200 is used for collecting video streams to be detected, the living body discriminating apparatus 100 is respectively communicated with the binocular module 200 and the external device 400, and the external device 400 is used for displaying living body discriminating results obtained by the living body discriminating apparatus 100.
As shown in fig. 7, the living body discriminating apparatus 100 may include: one or more processors 101, one or more input devices 102, one or more output devices 103, and a memory 104, the processors 101, input devices 102, output devices 103, and memory 104 being interconnected by a bus 105. The memory 104 is used for storing a computer program comprising program instructions, the processor 101 being configured to invoke the program instructions for performing the method of the above-described embodiment of the in-vivo discrimination method section based on IR and RGB dual-shot.
It should be appreciated that in embodiments of the present invention, the processor 101 may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The input device 102 may include a keyboard or the like, and the output device 103 may include a display (LCD or the like), a speaker or the like.
The memory 104 may include read only memory and random access memory and provides instructions and data to the processor 101. A portion of the memory 104 may also include non-volatile random access memory. For example, the memory 104 may also store information of device type.
In a specific implementation, the processor 101, the input device 102, and the output device 103 described in the embodiments of the present invention may execute the implementation described in the embodiment of the method for determining a living body based on IR and RGB double-shot provided in the first embodiment of the present invention, which is not described herein.
Further, corresponding to the living body discriminating method and the living body discriminating apparatus based on the IR and RGB double photographing of the first embodiment, the embodiment of the invention further provides a readable storage medium storing a computer program comprising program instructions which when executed by a processor realize: the living body discriminating method based on the IR and RGB double photographing of the first embodiment described above.
The computer readable storage medium may be an internal storage unit of the living body discriminating apparatus described in the foregoing embodiment, such as a hard disk or a memory of the system. The computer readable storage medium may also be an external storage device of the system, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the system. Further, the computer readable storage medium may also include both internal storage units and external storage devices of the system. The computer readable storage medium is used to store the computer program and other programs and data required by the system. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (6)

1. A living body discriminating method based on IR and RGB double photographing, characterized by comprising:
obtaining a video stream to be detected through a binocular module, wherein the binocular module comprises an RGB camera, an IR camera, an infrared light supplementing lamp and a display or functional area; the RGB camera and the IR camera are used for collecting video streams or sample images, the infrared light supplementing lamp is used for supplementing light for the RGB camera and the IR camera, and the display or functional area is used for displaying the identified living body judging result;
performing face detection on the video stream to be detected to obtain an RGB face image and a first IR face image;
calculating the face size through a binocular epipolar relation formed by the RGB camera and the IR camera, comparing the face size with a preset reasonable face size range, discarding the RGB face image and the first IR face image if the face size does not fall into the reasonable face size range, and carrying out spoof warning;
inputting the first IR face image into a convolutional neural network to perform living body judgment so as to obtain a first living body judgment result and a second IR face image;
inputting the RGB face image and the second IR face image into a twin network for living body discrimination to obtain a second living body discrimination result;
inputting the RGB face image and the second IR face image into the twin network, extracting features of the RGB face image and the second IR face image by the twin network, performing full-connection classification on the extracted features after performing difference processing to obtain a second living body judging result, and performing spoof alarming on a non-living body obtained after the full-connection classification;
the first living body judging result and the second IR face image are obtained specifically as follows: inputting the first IR face image into the convolutional neural network to perform living body judgment so as to obtain living body probability;
if the living body probability is larger than a preset value, determining the first IR face image corresponding to the living body probability as a first living body judging result;
and if the living body probability is smaller than a preset value, determining the first IR face image corresponding to the living body probability as the second IR face image.
2. The method of claim 1, wherein prior to obtaining the video stream to be detected by the binocular module, the method further comprises:
collecting a plurality of live pictures of a real person and print attack and mask attack pictures through the IR camera to serve as first training samples, and obtaining the convolutional neural network based on the first training samples;
and acquiring a plurality of live human pictures and print attack and mask attack pictures through the RGB camera and the IR camera to serve as second training samples, and obtaining the twin network based on the second training samples.
3. The method according to claim 2, wherein after obtaining the second living body discrimination result, the method further comprises:
and sending the first living body judging result and the second living body judging result to external equipment for display.
4. A living body discriminating apparatus based on IR and RGB double photographing, characterized by comprising a processor, an input device, an output device and a memory, which are connected to each other, wherein the memory is for storing a computer program comprising program instructions, the processor being configured to invoke the program instructions to execute the method of claim 3.
5. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of claim 3.
6. A living body discriminating system based on IR and RGB double shooting, comprising a binocular module, an infrared light supplementing lamp, a living body discriminating device and an external device, the binocular module comprising an RGB camera and an IR camera, the infrared light supplementing lamp being used for supplementing light for the IR camera, characterized in that the binocular module is used for collecting the video stream to be detected, the living body discriminating device communicates with the binocular module and the external device respectively, and the living body discriminating device is the device according to claim 4.
CN202010316897.4A 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting Active CN111539311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010316897.4A CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010316897.4A CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Publications (2)

Publication Number Publication Date
CN111539311A CN111539311A (en) 2020-08-14
CN111539311B true CN111539311B (en) 2024-03-01

Family

ID=71975221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010316897.4A Active CN111539311B (en) 2020-04-21 2020-04-21 Living body judging method, device and system based on IR and RGB double shooting

Country Status (1)

Country Link
CN (1) CN111539311B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112347904B (en) * 2020-11-04 2023-08-01 杭州锐颖科技有限公司 Living body detection method, device and medium based on binocular depth and picture structure
CN112464741B (en) * 2020-11-05 2021-11-26 马上消费金融股份有限公司 Face classification method, model training method, electronic device and storage medium
CN113221830B (en) * 2021-05-31 2023-09-01 平安科技(深圳)有限公司 Super-division living body identification method, system, terminal and storage medium
CN113255586B (en) * 2021-06-23 2024-03-15 中国平安人寿保险股份有限公司 Face anti-cheating method based on RGB image and IR image alignment and related equipment
CN115623291A (en) * 2022-08-12 2023-01-17 深圳市新良田科技股份有限公司 Binocular camera module, service system and face verification method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN108764071A (en) * 2018-05-11 2018-11-06 四川大学 A real human face detection method and device based on infrared and visible light images
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set
CN110852134A (en) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10657401B2 (en) * 2017-06-06 2020-05-19 Microsoft Technology Licensing, Llc Biometric object spoof detection based on image intensity variations

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN108764071A (en) * 2018-05-11 2018-11-06 四川大学 A real human face detection method and device based on infrared and visible light images
CN110852134A (en) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An identity authentication method combining liveness detection and face recognition; Shuhua Liu et al.; MDPI; 2019-10-31; pp. 1-10 *
Multi-cue fusion face liveness detection based on a fine-tuning strategy; Hu Fei; Wen Chang; Xie Kai; He Jianbiao; Computer Engineering; 2018-06-27 (No. 05); pp. 262-266 *

Also Published As

Publication number Publication date
CN111539311A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111539311B (en) Living body judging method, device and system based on IR and RGB double shooting
CN108764071A (en) A real human face detection method and device based on infrared and visible light images
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN111368601B (en) Living body detection method and apparatus, electronic device, and computer-readable storage medium
US6370262B1 (en) Information processing apparatus and remote apparatus for object, using distance measuring apparatus
CN106570899B (en) Target object detection method and device
CN109656033B (en) Method and device for distinguishing dust and defects of liquid crystal display screen
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN113673584A (en) Image detection method and related device
CN111837158A (en) Image processing method and device, shooting device and movable platform
CN109711375B (en) Signal lamp identification method and device
CN107018407B (en) Information processing device, evaluation chart, evaluation system, and performance evaluation method
CN113822942B (en) Method for measuring object size by monocular camera based on two-dimensional code
CN109559353A (en) Camera module scaling method, device, electronic equipment and computer readable storage medium
CN111325107A (en) Detection model training method and device, electronic equipment and readable storage medium
CN111127358B (en) Image processing method, device and storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
US9536162B2 (en) Method for detecting an invisible mark on a card
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
CN116380918A (en) Defect detection method, device and equipment
US20210034881A1 (en) Monitoring method, apparatus and system, electronic device, and computer readable storage medium
JP6874315B2 (en) Information processing equipment, information processing methods and programs
KR100827133B1 (en) Method and apparatus for distinguishment of 3d image in mobile communication terminal
KR20150009842A (en) System for testing camera module centering and method for testing camera module centering using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant