CN112115852A - Living body detection method using RGB infrared camera - Google Patents

Living body detection method using RGB infrared camera

Info

Publication number
CN112115852A
CN112115852A (application CN202010977725.1A)
Authority
CN
China
Prior art keywords
face
rgb
image
attack
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010977725.1A
Other languages
Chinese (zh)
Inventor
安民洙
姜贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Light Speed Intelligent Equipment Co.,Ltd.
Tenghui Technology Building Intelligence (Shenzhen) Co.,Ltd.
Original Assignee
Guangdong Light Speed Intelligent Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Light Speed Intelligent Equipment Co ltd filed Critical Guangdong Light Speed Intelligent Equipment Co ltd
Priority to CN202010977725.1A priority Critical patent/CN112115852A/en
Publication of CN112115852A publication Critical patent/CN112115852A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A liveness detection method using an RGB infrared camera, comprising: S1, acquiring an RGB image and an infrared image; S2, detecting the face frame position in the RGB image obtained in S1 and cropping the face region to obtain the RGB face; S3, cropping the region of the infrared image corresponding to the RGB face position to obtain the infrared face; S4, performing screen-attack judgment on the infrared face obtained in S3, and judging whether it is regarded as a screen attack; S5, when S4 does not regard the face as a screen attack, performing photo-attack judgment on the RGB face, and judging whether it is regarded as a photo attack; and S6, if S5 does not regard the face as a photo attack, the face passes liveness detection. The invention does not require the user to perform extra cooperative actions, so the user experience is good; no additional sensors are added, so optimizing security, speed and user experience brings no increase in cost.

Description

Living body detection method using RGB infrared camera
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a living body detection method using an RGB infrared camera.
Background
Liveness detection is a technique for judging whether the currently detected face belongs to a live person, and is a key technology in face recognition. The main liveness detection technologies at present include: motion-coupled liveness detection, silent liveness detection, infrared liveness detection, and 3D structured-light liveness detection.
Motion-coupled liveness detection: the user is given specified action requirements and must cooperate to complete them, and liveness is judged by detecting the state of the user's eyes, mouth and head in real time. The method has high accuracy and is widely applied; but its defects are also obvious: it demands a high degree of user cooperation, so the user experience is poor, and an attacker can hollow out the eye and mouth regions of a photo of a legitimate user and perform the requested actions through the openings, so the cost of spoofing is low.
Silent liveness detection: compared with motion-coupled methods, silent liveness detection does not require any action from the user, who simply faces the camera naturally for 3 to 4 seconds. Since a real face is never absolutely still, micro-movements such as the motion of the eyelids and eyeballs, blinking, and the stretching of the lips and surrounding cheeks can be used for anti-spoofing. The method needs no user cooperation, gives a good user experience, and can effectively prevent photo and video attacks; but it requires a long waiting time from the user and is greatly affected by lighting.
Near-infrared liveness detection: liveness judgment at night or without natural light is realized using the near-infrared imaging principle. Its imaging characteristics (for example, a screen cannot be imaged, and different materials have different reflectivities) enable highly robust liveness judgment. The method needs no user cooperation, effectively prevents photo and video attacks, has a high recognition success rate, and is little affected by lighting.
3D structured-light liveness detection: based on the 3D structured-light imaging principle, a depth image is constructed from the light reflected by the face surface to judge whether the target object is a live body; it can effectively defend against photo, video, screen and mold attacks. 3D structured light can recognize occlusion, makeup and the like, adapts well at night, is little affected by lighting, and can completely block photo and video attacks.
However, although the latter two methods are excellent in security and user experience, both require additional sensors, so the detection cost is high.
None of the four existing liveness detection methods achieves a good balance among user experience, security and cost, and each has certain application defects.
At present, some methods improve the accuracy of liveness detection by combining multiple algorithms, but most of them recognize only one type of attack. In real scenes the attack modes are diverse, and handling each attack method separately would undoubtedly increase time consumption, so a liveness detection method oriented to real scenes needs to be studied.
It can be seen that the prior art has a number of problems.
Disclosure of Invention
To this end, in order to solve the above problems in the prior art, the present invention proposes a liveness detection method using an RGB infrared camera.
The invention solves the problems through the following technical means:
A liveness detection method using an RGB infrared camera, comprising:
S1, acquiring an RGB image and an infrared image;
S2, detecting the face frame position in the RGB image obtained in S1 and cropping the face region to obtain the RGB face;
S3, cropping the region of the infrared image corresponding to the RGB face position to obtain the infrared face;
S4, performing screen-attack judgment on the infrared face obtained in S3, and judging whether it is regarded as a screen attack;
S5, when S4 does not regard the face as a screen attack, performing photo-attack judgment on the RGB face, and judging whether it is regarded as a photo attack;
and S6, if S5 does not regard the face as a photo attack, the face passes liveness detection.
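The S1-S6 cascade can be sketched as a short decision function. Everything below is an illustrative stand-in: `detect_face`, `crop`, `is_screen_attack` and `is_photo_attack` are hypothetical callables representing the patent's networks, not part of the disclosure.

```python
# Illustrative sketch of the S1-S6 decision pipeline. The callables
# passed in are hypothetical stand-ins for the patent's detection
# networks; only the cascade order follows the patent.

def liveness_pipeline(rgb_image, ir_image, detect_face, crop,
                      is_screen_attack, is_photo_attack):
    """Return a decision string following steps S1-S6."""
    # S2: locate the face box in the RGB image and crop it.
    box = detect_face(rgb_image)
    if box is None:
        return "no_face"
    rgb_face = crop(rgb_image, box)
    # S3: crop the same region from the infrared image.
    ir_face = crop(ir_image, box)
    # S4: screen-attack check on the infrared face.
    if is_screen_attack(ir_face):
        return "screen_attack"
    # S5: photo-attack check on the RGB face.
    if is_photo_attack(rgb_face):
        return "photo_attack"
    # S6: neither attack detected, so the face passes liveness detection.
    return "live"
```

The point of the ordering is that the cheap infrared screen check rejects screen replays before the photo-attack network is ever run.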
Further, the acquiring of the RGB face in S2 includes:
S21, scaling the RGB image at multiple scales to generate an image pyramid; an input image of size 12 × 12 × 3 passes through three convolutional layers of the convolutional network to give a feature layer of size 1 × 1 × 24;
S22, sending the 1 × 1 × 24 layer as a feature vector into three branches: a 1 × 1 × 2 face classification layer, a 1 × 1 × 4 bounding-box regression layer and a 1 × 1 × 2 texture classification layer;
S23, inputting the image pyramid into the network; when the size of the scaled image is not 12 × 12, the outputs obtained are m × m × 2 and m × m × 4, and the 12 × 12 patch corresponding to each result is mapped back to its position on the scaled image;
S24, extracting face scores from the face classification layer obtained in S22, retaining the bounding boxes whose scores are higher than an initial preset threshold, and then performing non-maximum suppression on the bounding boxes detected on all the scaled images;
S25, cropping the bounding boxes of S24 from the original input image of S1 and scaling them to 24 × 24 × 3;
S26, passing the result of S25 through 5 convolutional layers to obtain feature vectors of length 128, sending them into a face classification branch and a bounding-box regression branch to obtain face scores and bounding boxes, discarding the bounding boxes whose scores are below the threshold, and performing non-maximum suppression on the remaining ones;
S27, cropping the bounding boxes obtained in S26 from the original image input in S1 and scaling them to 48 × 48 × 3;
S28, passing the result of S27 through 6 convolutional layers to obtain 256-dimensional feature vectors and sending them into a face classification branch and a bounding-box regression branch;
S29, retaining the bounding boxes whose scores exceed the threshold and merging them using non-maximum suppression;
and S210, retaining the face with the largest area among all the faces processed in S29 as the finally detected face, i.e. the RGB face.
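Steps S24, S26 and S29 each rely on non-maximum suppression (NMS). A minimal pure-Python NMS over `(x1, y1, x2, y2)` boxes is sketched below; the 0.5 IoU threshold is a common default, not a value taken from the patent.

```python
# Minimal NMS as used between the cascade stages above: greedily keep
# the highest-scoring box and drop any remaining box that overlaps it
# too much, then repeat.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, suppressing high-overlap ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

For example, two heavily overlapping face candidates collapse to the higher-scoring one, while a distant box survives.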
Further, in S4, a binary classifier is used to judge the screen attack.
Further, in S4, the obtained infrared face is input into the face detection network for score calculation; if the score exceeds a preset threshold it is not regarded as a screen attack, and otherwise it is regarded as a screen attack.
Further, in S5, a lightweight neural network is used to judge the photo attack.
Further, the lightweight neural network includes:
S51, scaling the RGB face; the network input is 48 × 48 × 3, on which a 3 × 3 convolution with stride 1 is performed, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 23 × 23 × 32;
S52, performing a 3 × 3 convolution with stride 1 on the output of S51, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 10 × 10 × 64;
S53, performing a 3 × 3 convolution with stride 1 on the output of S52, followed by a 5 × 5 convolution with stride 2 to reduce the image size, outputting 4 × 4 × 64;
S54, performing a 1 × 1 convolution on the output of S53, outputting 3 × 3 × 128;
S55, passing the output of S54 through a fully connected layer to obtain a 256-dimensional output vector;
and S56, passing the 256-dimensional output vector of S55 through a fully connected layer to output a face detection score; if the face score is lower than the threshold, the input is regarded as a photo attack.
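The spatial sizes quoted in S51-S56 follow from the standard convolution output formula. The padding values below are my assumption, since the patent lists only kernel size and stride:

```python
# Standard convolution output-size formula. Padding is not stated in
# the patent, so the pad values used in the checks are assumptions.

def conv_out(n, kernel, stride, pad=0):
    """Spatial output size: floor((n + 2*pad - kernel) / stride) + 1."""
    return (n + 2 * pad - kernel) // stride + 1
```

With 'same' padding (pad=1) on the stride-1 3 × 3 convolution and no padding on the stride-2 convolution, 48 maps to 48 and then to 23, matching the 23 × 23 output of S51; the later stages imply slightly different padding choices that the patent does not specify.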
The invention can recognize both screen attacks and photo attacks, greatly improving the speed of the whole process. Meanwhile, photo attacks can be effectively recognized through the lightweight neural network.
The invention adopts a detection mode that uses the RGB image and the infrared image in parallel, which improves detection security while ensuring detection speed; a simple classifier is first used to judge the screen attack, and the photo-attack judgment (the lightweight neural network) is performed only after that judgment passes.
The invention does not require the user to perform extra cooperative actions, so the user experience is good; no additional sensors are added, so optimizing security, speed and user experience brings no increase in cost.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a liveness detection method using an RGB infrared camera of the present invention;
FIG. 2 is a flow chart of acquiring the RGB face in the liveness detection method using an RGB infrared camera of the present invention;
FIG. 3 is a flow chart of a lightweight neural network in a liveness detection method using an RGB infrared camera according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It should be noted that the described embodiments are only some of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art without inventive work fall within the scope of the present invention.
Examples
As shown in FIG. 1, a liveness detection method using an RGB infrared camera comprises a detection network and includes:
S1, acquiring an RGB image and an infrared image; the corresponding RGB image and infrared image are acquired through the RGB infrared camera.
S2, detecting the face frame position in the RGB image obtained in S1 and cropping the face region to obtain the RGB face. Preferably, as shown in FIG. 2, the acquiring of the RGB face in S2 includes:
S21, scaling the RGB image at multiple scales to generate an image pyramid; an input image of size 12 × 12 × 3 passes through three convolutional layers of the convolutional network to give a feature layer of size 1 × 1 × 24;
S22, sending the 1 × 1 × 24 layer as a feature vector into three branches: a 1 × 1 × 2 face classification layer, a 1 × 1 × 4 bounding-box regression layer and a 1 × 1 × 2 texture classification layer;
S23, inputting the image pyramid into the network; when the size of the scaled image is not 12 × 12, the outputs obtained are m × m × 2 and m × m × 4, and the 12 × 12 patch corresponding to each result is mapped back to its position on the scaled image; this is equivalent to running a sliding window over the scaled image.
S24, extracting face scores from the face classification layer obtained in S22, retaining the bounding boxes whose scores are higher than an initial preset threshold, and then performing non-maximum suppression on the bounding boxes detected on all the scaled images;
S25, cropping the bounding boxes of S24 from the original image and scaling them to 24 × 24 × 3;
S26, passing the result of S25 through 5 convolutional layers to obtain feature vectors of length 128, sending them into a face classification branch and a bounding-box regression branch to obtain face scores and bounding boxes, discarding the bounding boxes whose scores are below the threshold, and performing non-maximum suppression on the remaining ones;
S27, cropping the bounding boxes obtained in S26 from the original image input in S1 and scaling them to 48 × 48 × 3;
S28, passing the result of S27 through 6 convolutional layers to obtain 256-dimensional feature vectors and sending them into a face classification branch and a bounding-box regression branch;
S29, retaining the bounding boxes whose scores exceed the threshold and merging them using non-maximum suppression;
and S210, retaining the face with the largest area among all the faces processed in S29 as the finally detected face, i.e. the RGB face.
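S21 above builds an image pyramid before the 12 × 12 network is applied. A sketch of generating the pyramid's scale factors so that every face above a minimum size eventually maps to the 12 × 12 input; the `min_size` of 20 and the 0.709 shrink factor are common choices in MTCNN-style detectors, not values given by the patent.

```python
# Hypothetical image-pyramid schedule for the 12x12 first-stage
# network: rescale so a min_size face becomes 12 px, then shrink the
# image repeatedly until its short side drops below 12 px.

def pyramid_scales(width, height, min_size=20, factor=0.709):
    """Scale factors at which to resize the image for the pyramid."""
    scales = []
    m = 12.0 / min_size          # maps a min_size face to 12 px
    side = min(width, height) * m
    while side >= 12.0:          # the network input cannot shrink past 12
        scales.append(m)
        m *= factor
        side *= factor
    return scales
```

Each scale yields one pass of the sliding-window-equivalent network of S23; larger faces are caught at the smaller scales.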
S3, cropping the region of the infrared image corresponding to the RGB face position to obtain the infrared face;
S4, performing screen-attack judgment on the infrared face obtained in S3, and judging whether it is regarded as a screen attack. Preferably, in S4, a binary classifier is used to judge the screen attack. Preferably, in S4, the acquired infrared face is input into the detection network to calculate a score; if the score exceeds a preset threshold it is not regarded as a screen attack, and otherwise it is regarded as a screen attack.
It should be noted that a screen images dark under the infrared camera, so the distinction between a screen and a human face is high, while the distinction between a photo and a human face under the infrared camera is low; whether there is a screen attack can therefore be distinguished from the characteristics of a screen under the infrared camera. The picture of the face region cropped from the infrared image is input into the detection network to calculate a score; if the score exceeds the set threshold, it is determined that a face exists in the cropped region and the next judgment is made, and otherwise the cropped region is determined to be a screen attack.
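The patent's screen check runs a face-detection network on the infrared crop; the physical cue it exploits is that screens image dark under infrared. A toy mean-intensity check illustrating only that cue (not the patent's classifier); the 0-255 pixel range and the threshold of 30 are assumptions for the illustration.

```python
# Toy illustration of the infrared cue, NOT the patent's network: a
# screen replay produces an almost-black infrared crop, so a very low
# mean intensity is a strong hint of a screen attack.

def looks_like_screen(ir_face_pixels, min_mean=30.0):
    """Flag as a screen attack when the IR crop is almost black."""
    mean = sum(ir_face_pixels) / len(ir_face_pixels)
    return mean < min_mean
```

In the actual method this heuristic is replaced by the face-detection score threshold described above, which is more robust than raw brightness.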
S5, when S4 does not regard the face as a screen attack, performing photo-attack judgment on the RGB face, and judging whether it is regarded as a photo attack. Preferably, in S5, a lightweight neural network is used to judge the photo attack. Preferably, as shown in FIG. 3, the lightweight neural network includes:
S51, scaling the RGB face; the network input is 48 × 48 × 3, on which a 3 × 3 convolution with stride 1 is performed, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 23 × 23 × 32;
S52, performing a 3 × 3 convolution with stride 1 on the output of S51, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 10 × 10 × 64;
S53, performing a 3 × 3 convolution with stride 1 on the output of S52, followed by a 5 × 5 convolution with stride 2 to reduce the image size, outputting 4 × 4 × 64;
S54, performing a 1 × 1 convolution on the output of S53, outputting 3 × 3 × 128;
S55, passing the output of S54 through a fully connected layer to obtain a 256-dimensional output vector;
and S56, passing the 256-dimensional output vector of S55 through a fully connected layer to output a face detection score; if the face score is lower than the threshold, the input is regarded as a photo attack.
Under an infrared camera, a photo-attack image is difficult to distinguish from a real face, so high-level features of both must be acquired for the judgment. The lightweight neural network converts image-space features that are hard to separate into high-dimensional features, making it possible to judge the photo attack, i.e. whether the input is a printed photo or a real face.
And S6, if the photo attack is not considered in the S5, detecting the living body.
At this point, the entire liveness detection process is completed.
It can be seen that the liveness detection method using an RGB infrared camera provided by the invention can recognize both screen attacks and photo attacks, greatly improving the speed of the whole process. Meanwhile, photo attacks can be effectively recognized through the lightweight neural network.
The invention adopts a detection mode that uses the RGB image and the infrared image in parallel, which improves detection security while ensuring detection speed; a simple classifier is first used to judge the screen attack, and the photo-attack judgment (the lightweight neural network) is performed only after that judgment passes.
The invention does not require the user to perform extra cooperative actions, so the user experience is good; no additional sensors are added, so optimizing security, speed and user experience brings no increase in cost.
Reference throughout this specification to "one embodiment," "another embodiment," "an embodiment," "a preferred embodiment," or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally in this application. The appearances of the same phrase in various places in the specification do not necessarily all refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with the other embodiments. Although the invention has been described herein with reference to a number of illustrative embodiments, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that fall within the scope and spirit of the principles of this disclosure; in particular, variations and modifications of the component parts and/or arrangements are possible within the scope of the disclosure, the drawings and the claims.

Claims (6)

1. A liveness detection method using an RGB infrared camera, comprising a detection network, characterized by comprising:
S1, acquiring an RGB image and an infrared image;
S2, detecting the face frame position in the RGB image obtained in S1 and cropping the face region to obtain the RGB face;
S3, cropping the region of the infrared image corresponding to the RGB face position to obtain the infrared face;
S4, performing screen-attack judgment on the infrared face obtained in S3, and judging whether it is regarded as a screen attack;
S5, when S4 does not regard the face as a screen attack, performing photo-attack judgment on the RGB face, and judging whether it is regarded as a photo attack;
and S6, if S5 does not regard the face as a photo attack, the face passes liveness detection.
2. The liveness detection method using an RGB infrared camera according to claim 1, wherein the acquiring of the RGB face in S2 includes:
S21, scaling the RGB image at multiple scales to generate an image pyramid; an input image of size 12 × 12 × 3 passes through three convolutional layers of the convolutional network to give a feature layer of size 1 × 1 × 24;
S22, sending the 1 × 1 × 24 layer as a feature vector into three branches: a 1 × 1 × 2 face classification layer, a 1 × 1 × 4 bounding-box regression layer and a 1 × 1 × 2 texture classification layer;
S23, inputting the image pyramid into the network; when the size of the scaled image is not 12 × 12, the outputs obtained are m × m × 2 and m × m × 4, and the 12 × 12 patch corresponding to each result is mapped back to its position on the scaled image;
S24, extracting face scores from the face classification layer obtained in S22, retaining the bounding boxes whose scores are higher than an initial preset threshold, and then performing non-maximum suppression on the bounding boxes detected on all the scaled images;
S25, cropping the bounding boxes of S24 from the original image and scaling them to 24 × 24 × 3;
S26, passing the result of S25 through 5 convolutional layers to obtain feature vectors of length 128, sending them into a face classification branch and a bounding-box regression branch to obtain face scores and bounding boxes, discarding the bounding boxes whose scores are below the threshold, and performing non-maximum suppression on the remaining ones;
S27, cropping the bounding boxes obtained in S26 from the original image input in S1 and scaling them to 48 × 48 × 3;
S28, passing the result of S27 through 6 convolutional layers to obtain 256-dimensional feature vectors and sending them into a face classification branch and a bounding-box regression branch;
S29, retaining the bounding boxes whose scores exceed the threshold and merging them using non-maximum suppression;
and S210, retaining the face with the largest area among all the faces processed in S29 as the finally detected face, i.e. the RGB face.
3. The liveness detection method using an RGB infrared camera according to claim 1, wherein in S4 a binary classifier is used to judge the screen attack.
4. The liveness detection method using an RGB infrared camera according to claim 3, wherein in S4 the obtained infrared face is input into the face detection network to calculate a score; when the score exceeds a preset threshold it is not regarded as a screen attack, and otherwise it is regarded as a screen attack.
5. The liveness detection method using an RGB infrared camera according to claim 1, wherein in S5 a lightweight neural network is used to judge the photo attack.
6. The liveness detection method using an RGB infrared camera according to claim 5, wherein the lightweight neural network includes:
S51, scaling the RGB face; the network input is 48 × 48 × 3, on which a 3 × 3 convolution with stride 1 is performed, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 23 × 23 × 32;
S52, performing a 3 × 3 convolution with stride 1 on the output of S51, followed by a 3 × 3 convolution with stride 2 to reduce the image size, outputting 10 × 10 × 64;
S53, performing a 3 × 3 convolution with stride 1 on the output of S52, followed by a 5 × 5 convolution with stride 2 to reduce the image size, outputting 4 × 4 × 64;
S54, performing a 1 × 1 convolution on the output of S53, outputting 3 × 3 × 128;
S55, passing the output of S54 through a fully connected layer to obtain a 256-dimensional output vector;
and S56, passing the 256-dimensional output vector of S55 through a fully connected layer to output a face detection score; if the face score is lower than the threshold, the input is regarded as a photo attack.
CN202010977725.1A 2020-09-17 2020-09-17 Living body detection method using RGB infrared camera Pending CN112115852A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010977725.1A CN112115852A (en) 2020-09-17 2020-09-17 Living body detection method using RGB infrared camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010977725.1A CN112115852A (en) 2020-09-17 2020-09-17 Living body detection method using RGB infrared camera

Publications (1)

Publication Number Publication Date
CN112115852A true CN112115852A (en) 2020-12-22

Family

ID=73803243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010977725.1A Pending CN112115852A (en) 2020-09-17 2020-09-17 Living body detection method using RGB infrared camera

Country Status (1)

Country Link
CN (1) CN112115852A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046703A (en) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN110298230A (en) * 2019-05-06 2019-10-01 深圳市华付信息技术有限公司 Silent biopsy method, device, computer equipment and storage medium
CN110472519A (en) * 2019-07-24 2019-11-19 杭州晟元数据安全技术股份有限公司 A kind of human face in-vivo detection method based on multi-model
CN110659617A (en) * 2019-09-26 2020-01-07 杭州艾芯智能科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN110728215A (en) * 2019-09-26 2020-01-24 杭州艾芯智能科技有限公司 Face living body detection method and device based on infrared image
CN111582238A (en) * 2020-05-28 2020-08-25 上海依图网络科技有限公司 Living body detection method and device applied to face shielding scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801057A (en) * 2021-04-02 2021-05-14 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112801057B (en) * 2021-04-02 2021-07-13 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US20200005061A1 (en) Living body detection method and system, computer-readable storage medium
JP4307496B2 (en) Facial part detection device and program
US6504944B2 (en) Image recognition apparatus and method
CN108182409B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
KR100608595B1 (en) Face identifying method and apparatus
KR102147052B1 (en) Emotional recognition system and method based on face images
CN104598897B (en) Visual sensor, image processing method and device, visual interactive equipment
CN104584531B (en) Image processing apparatus and image display device
CN105072327B (en) A kind of method and apparatus of the portrait processing of anti-eye closing
CN110348270B (en) Image object identification method and image object identification system
CN110837750B (en) Face quality evaluation method and device
JP2005149144A (en) Object detection device, object detection method, and recording medium
CN111523344B (en) Human body living body detection system and method
CN106881716A (en) Human body follower method and system based on 3D cameras robot
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
CN112818722A (en) Modular dynamically configurable living body face recognition system
CN111382592A (en) Living body detection method and apparatus
CN107862298B (en) Winking living body detection method based on infrared camera device
CN115131880A (en) Multi-scale attention fusion double-supervision human face in-vivo detection method
CN112434647A (en) Human face living body detection method
CN112115852A (en) Living body detection method using RGB infrared camera
Song et al. Face liveness detection based on joint analysis of rgb and near-infrared image of faces
CN106778576A (en) A kind of action identification method based on SEHM feature graphic sequences
CN106599779A (en) Human ear recognition method
CN109986553B (en) Active interaction robot, system, method and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210831

Address after: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant after: Guangdong Light Speed Intelligent Equipment Co.,Ltd.

Applicant after: Tenghui Technology Building Intelligence (Shenzhen) Co.,Ltd.

Address before: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant before: Guangdong Light Speed Intelligent Equipment Co.,Ltd.
