CN111680588A - Human face gate living body detection method based on visible light and infrared light - Google Patents
- Publication number
- CN111680588A (application number CN202010456278.5A)
- Authority
- CN
- China
- Prior art keywords
- face
- infrared
- visible light
- living body
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
Abstract
The invention discloses a human face gate living body detection method based on visible light and infrared light. The method intercepts the face regions of visible light and infrared pictures and inputs them into a multitask convolutional neural network for face detection; the visible light and infrared images containing faces are then input into a dense face key point deep learning model, which outputs dense face key point coordinates. Feature vectors of the dense face key points and a three-dimensional face feature vector are computed, and a living body detection classifier is trained with an SVM (support vector machine). A deep learning model is also trained on infrared face images with a lightweight neural network according to texture characteristics. Finally, the identity corresponding to the real face is judged according to the living body detection classifier, the trained deep learning model and a face recognition model, which determines whether the gate is opened. The invention distinguishes real faces from fake faces using both depth information and texture, significantly improves living body detection accuracy, requires little computation, and meets real-time requirements.
Description
Technical Field
The invention belongs to the technical field of biometric feature recognition, and particularly relates to a human face gate living body detection method based on visible light and infrared light.
Background
Due to their uniqueness, naturalness, non-contact nature and ease of acquisition, face features are widely applied in intelligent identity verification fields such as finance and security. With the rapid development of face recognition technology, more and more gate systems incorporate face recognition, which markedly improves efficiency and convenience compared with ordinary gates. However, forged-face attacks such as printed face photos, face video playback and three-dimensional masks emerge endlessly, so to ensure system security, face living body detection is increasingly important in face recognition systems.
Currently, there are three main methods of living body detection:
1. Human-system interactive living body detection requires the user to blink, shake the head or open the mouth according to system prompts. The detection process is long and the experience is unfriendly; it is mainly used for real-name verification in mobile phone apps and is not suitable for face gates.
2. Living body detection based on a monocular visible light camera: the image is two-dimensional and easily affected by the illumination environment, so the effect is unstable.
3. Living body detection based on a depth camera distinguishes living bodies through 3D structured light imaging, but has the drawback that depth cameras are expensive.
Disclosure of Invention
The invention provides a human face gate living body detection method based on visible light and infrared light, which distinguishes real faces from fake faces using both depth information and texture, significantly improves living body detection accuracy, requires little computation, and meets real-time requirements.
The technical scheme of the invention is realized as follows:
a human face gate living body detection method based on visible light and infrared light specifically comprises the following steps:
s1, acquiring a picture: simultaneously acquiring picture frames from the visible light video stream and the infrared video stream, and respectively intercepting pictures with set sizes in the center areas of the visible light and the infrared pictures;
s2, face detection: detecting a face in the intercepted visible light picture using a multitask convolutional neural network, expanding the region of the detected visible light face frame, cropping the intercepted infrared picture according to the expanded face frame, and inputting the cropped infrared image into the multitask convolutional neural network for face detection; if no face is detected, returning to step S1 until a face is detected;
s3, key point detection: inputting visible light and infrared images with the face into a face dense key point deep learning model, and outputting face dense key point coordinates;
s4, living body detection model: calculating feature vectors of the dense face key points and the three-dimensional face feature vector, and training a living body detection classifier with an SVM (support vector machine);
s5, infrared human face model: training a deep learning model by using an infrared face image lightweight neural network according to the texture characteristics;
s6, comprehensive judgment: and judging the identity information corresponding to the real face according to the living body detection classifier, the trained deep learning model and the face recognition model, and determining whether the gate is opened or not.
As a preferred embodiment of the present invention, performing region expansion on the detected visible light face frame specifically means expanding the area of the detected visible light face frame by a factor of 1.2 to 1.5.
As a preferred embodiment of the present invention, the visible light and the infrared image in step S3 output 1000 face dense keypoint coordinates, respectively.
As a preferred embodiment of the present invention, step S4 specifically includes:
s401, calculating the relative distances in the x direction between the 1000 corresponding key point coordinates of the infrared face and the visible light face, forming a 1000-dimensional feature vector;
s402, calculating the pitch, yaw and roll angles of the face from the distance proportions among two eye key points, two mouth corner key points and one nose key point among the face key points, forming a three-dimensional feature vector;
and S403, fusing the two feature vectors into a 1003-dimensional vector, collecting a real and fake face data set, and training a living body detection classifier with an SVM (support vector machine).
As a preferred embodiment of the present invention, the step S5 of training the deep learning model with the infrared face image lightweight neural network according to the texture features specifically refers to training the deep learning model with the infrared face image lightweight neural network FeatherNetB according to the texture features.
As a preferred embodiment of the present invention, step S6 specifically includes:
s601, obtaining the cut visible light picture and the cut infrared picture, inputting the visible light picture and the cut infrared picture into a living body detection classifier, restarting the detection of a new pair of picture frames if the output result is a fake face, and executing the next step if the output result is a real face;
s602, inputting into the trained deep learning model: if the confidence of the fake face is greater than a threshold value, the face is determined to be fake and detection restarts on a new pair of image frames; otherwise the face is determined to be real;
s603, inputting the face recognition model, judging the identity information corresponding to the real face, and determining whether the gate is opened.
The invention has the beneficial effects that: depth information formed from the dense key points of the infrared and visible light face images, together with infrared image texture that is insensitive to the lighting environment, is captured simultaneously from the two cameras, and lightweight models are used to distinguish real faces from fake faces. This improves the living body detection accuracy of the face gate while keeping cost low; the computation amount is small and real-time requirements can be met.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic view of a visible and infrared camera imaging plane;
FIG. 2 is a schematic structural diagram of a neural network FeatherNet B;
fig. 3 is a flowchart of a method for detecting a living body of a face gate based on visible light and infrared light according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the invention provides a human face gate live detection method based on visible light and infrared light, which specifically comprises the following steps:
s1, acquiring a picture: simultaneously acquiring picture frames from the visible light video stream and the infrared video stream, and respectively intercepting pictures with set sizes in the center areas of the visible light and the infrared pictures;
s2, face detection: detecting a face in the intercepted visible light picture using a multitask convolutional neural network, expanding the region of the detected visible light face frame, cropping the intercepted infrared picture according to the expanded face frame, and inputting the cropped infrared image into the multitask convolutional neural network for face detection; if no face is detected, returning to step S1 until a face is detected;
Firstly, an MTCNN (multi-task convolutional neural network) detector is used to detect the visible light face; the face frame coordinates are regressed through three cascaded neural networks. Because the imaging positions of the visible light camera and the infrared camera are offset in the image, the detected visible light face frame is expanded by 1.2 to 1.5 times; the expansion provides redundancy for the imaging position offset between the two cameras. Then the infrared image is cropped according to the expanded face frame, and a face is detected on the cropped infrared image using the MTCNN detector. If no face is detected on the infrared crop, a new round of detection is started. Owing to the light reflection of a screen, this step can already filter out some video-playback fake faces.
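The frame expansion and infrared cropping described above can be sketched as follows, assuming face boxes are (x, y, w, h) pixel tuples and images are NumPy arrays; all names are illustrative, not part of the patent.

```python
import numpy as np

def expand_box(box, img_w, img_h, scale=1.3):
    """Expand a face box (x, y, w, h) about its center by `scale`
    (the patent uses 1.2-1.5x), clamped to the image bounds, to absorb
    the imaging-position offset between the two cameras."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx = max(0, cx - nw / 2)
    ny = max(0, cy - nh / 2)
    nw = min(nw, img_w - nx)
    nh = min(nh, img_h - ny)
    return (nx, ny, nw, nh)

def crop(img, box):
    """Crop a NumPy image (H x W [x C]) with a possibly fractional box."""
    x, y, w, h = (int(round(v)) for v in box)
    return img[y:y + h, x:x + w]
```

The expanded box from the visible light frame would then be passed to `crop` on the infrared frame before running the MTCNN detector again.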
S3, key point detection: inputting the visible light and infrared images containing the face into a dense face key point deep learning model, which outputs dense face key point coordinates. The cropped visible light and infrared faces are each processed by the dense face key point model; for each image the model outputs a 2000-dimensional vector corresponding to 1000 key point coordinates (x1, y1), (x2, y2), …, (x1000, y1000).
S4, living body detection model: calculating feature vectors of the dense face key points and the three-dimensional face feature vector, and training a living body detection classifier with an SVM (support vector machine);
two points P of key point P in human face on two camera imaging planesl,PrRespectively is (x)l,yl),(xr,yr)。
According to a binocular ranging formula:
wherein f is the focal length of the camera, T is the distance between the two cameras, and the two parameters can be regarded as a fixed coefficient. So (x)l-xr) The depth information of the key points of the human face can be reflected.
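Given the fixed coefficients f and T, recovering depth from a matched key point pair is a one-line computation. The sketch below assumes the focal length is expressed in pixels and the baseline in the same unit as the desired depth; the function name and parameters are illustrative.

```python
def keypoint_depth(x_left, x_right, focal_px, baseline):
    """Binocular ranging: Z = f * T / (x_l - x_r), where f is the
    focal length (in pixels) and T the baseline between the cameras."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline / disparity
```

For example, with f = 500 px and T = 60 mm, a 10-pixel disparity corresponds to a depth of 3000 mm; larger disparities mean closer points, which is why the raw disparities alone suffice as a depth cue for the classifier.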
Step S4 specifically includes:
s401, calculating the relative distances in the x direction between the 1000 corresponding key point coordinates of the infrared face and the visible light face, forming a 1000-dimensional feature vector;
s402, calculating the pitch, yaw and roll angles of the face from the distance proportions among two eye key points, two mouth corner key points and one nose key point among the face key points, forming a three-dimensional feature vector;
and S403, fusing the two feature vectors into a 1003-dimensional vector, collecting a real and fake face data set, and training a living body detection classifier with an SVM (support vector machine). The basic idea of SVM learning is to solve for the separating hyperplane that correctly divides the training data set with the largest geometric margin. The trained model can thus distinguish a flat face by its depth information.
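Steps S401 to S403 can be sketched as follows, using scikit-learn's SVC as a stand-in for the SVM mentioned in the text; the key point array layout, the random stand-in training data and the linear kernel are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_features(kp_visible, kp_infrared, pose_angles):
    """Concatenate the 1000 x-direction relative distances between
    corresponding visible/infrared key points (the depth cue of step
    S401) with the (pitch, yaw, roll) vector of step S402 -> 1003 dims.
    Key point arrays are (1000, 2) with columns (x, y)."""
    x_distances = kp_visible[:, 0] - kp_infrared[:, 0]   # 1000-dim
    return np.concatenate([x_distances, pose_angles])    # 1003-dim

# Training sketch (step S403) on random stand-in data:
# label 1 = real face, 0 = fake face.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 1003))
y = np.array([0, 1] * 20)
clf = SVC(kernel="linear").fit(X, y)  # max-margin separating hyperplane
```

In practice X would be built by stacking `fuse_features` outputs from a collected real/fake face data set rather than random values.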
S5, infrared human face model: training a deep learning model by using an infrared face image lightweight neural network according to the texture characteristics;
video playback and plane paper human faces can be well recognized from depth information through key point characteristics. Some fake faces contain depth information such as masks and the like. The infrared face image is used for training a deep learning model by using a lightweight neural network FeatherNet B according to texture characteristics, and the three-dimensional fake face can be well recognized. Because the infrared camera is less influenced by the illumination environment, the robustness of the model is improved by using the infrared image.
FeatherNetB takes 224×224 pictures as input; the model is composed of the neural network sub-blocks Block A, Block B and Block C.
The cropped infrared face picture is normalized to 224×224 and input into the trained FeatherNetB, which outputs the probability vector over real face and fake face; comparing against a threshold judges whether the face is real or fake. The neural network model learns fine texture details, thereby improving the accuracy of living body detection.
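The thresholding of FeatherNetB's two-class output described above might look like the following sketch; the [real, fake] class ordering and the 0.5 default threshold are assumptions, not values given in the patent.

```python
import numpy as np

def softmax(logits):
    """Convert raw two-class logits to probabilities."""
    z = np.exp(logits - np.max(logits))  # shift for numerical stability
    return z / z.sum()

def is_fake_by_texture(logits, fake_index=1, threshold=0.5):
    """Map the network's two-class logits to probabilities and compare
    the fake-face probability against the decision threshold."""
    probs = softmax(np.asarray(logits, dtype=float))
    return bool(probs[fake_index] > threshold), probs
```

The returned probability vector could also feed the comprehensive judgment of step S6, where the fake-face confidence is compared with the threshold again.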
S6, comprehensive judgment: and judging the identity information corresponding to the real face according to the living body detection classifier, the trained deep learning model and the face recognition model, and determining whether the gate is opened or not.
Step S6 specifically includes:
s601, obtaining the cut visible light picture and the cut infrared picture, inputting the visible light picture and the cut infrared picture into a living body detection classifier, restarting the detection of a new pair of picture frames if the output result is a fake face, and executing the next step if the output result is a real face;
s602, inputting into the trained deep learning model: if the confidence of the fake face is greater than a threshold value, the face is determined to be fake and detection restarts on a new pair of image frames; otherwise the face is determined to be real;
s603, inputting the face recognition model, judging the identity information corresponding to the real face, and determining whether the gate is opened.
In the comprehensive judgment, the result of the SVM classification model built on the three-dimensional feature point information of the visible light and infrared images is checked first; if the result is negative, the face is judged to be fake and detection restarts on a new pair of image frames. Next, if the fake-face confidence output by the infrared texture model is greater than the threshold, the face is judged to be fake and detection restarts; otherwise it is judged to be real. Finally, the real face image is input into the face recognition model, which judges the corresponding identity; if the recognition confidence is greater than the threshold, the gate is opened.
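The three-stage cascade of steps S601 to S603 can be sketched as a single decision function with the three models passed in as callables; all names, thresholds and the stub identity used below are illustrative, not part of the patent.

```python
def gate_decision(frame_pair, svm_is_real, texture_fake_prob, recognize,
                  fake_thresh=0.5, id_thresh=0.8):
    """Three-stage cascade: depth-based SVM (S601) -> infrared texture
    model (S602) -> face recognition (S603).
    Returns (open_gate, identity_or_None)."""
    if not svm_is_real(frame_pair):                   # stage 1: depth cue
        return False, None
    if texture_fake_prob(frame_pair) > fake_thresh:   # stage 2: texture
        return False, None
    identity, confidence = recognize(frame_pair)      # stage 3: identity
    return (True, identity) if confidence > id_thresh else (False, None)
```

Ordering the cheap depth check before the network inference means most attacks are rejected without running the heavier models, which is consistent with the real-time goal stated above.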
According to the scheme, the 3D distance formula for an object under a binocular camera is applied: dense face key points are detected simultaneously on the infrared and visible light images, relative distances and the face pitch, yaw and roll angles are computed to form a multi-dimensional feature vector, and an SVM is trained to classify real faces against flat pictures and video playback on a playing device, thereby distinguishing real from fake faces by depth information.
According to the scheme, exploiting the characteristic that infrared images are little affected by illumination, the infrared camera captures real face images and attack face images, and the lightweight deep learning network FeatherNetB is used to train a probability model that distinguishes real face texture from fake face texture.
According to the scheme, real faces and attack faces are distinguished by both depth information and texture, which markedly improves living body detection accuracy while keeping the computation amount small and meeting real-time requirements.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (6)
1. A human face gate living body detection method based on visible light and infrared light is characterized by comprising the following steps:
s1, acquiring a picture: simultaneously acquiring picture frames from the visible light video stream and the infrared video stream, and respectively intercepting pictures with set sizes in the center areas of the visible light and the infrared pictures;
s2, face detection: detecting a face in the intercepted visible light picture using a multitask convolutional neural network, expanding the region of the detected visible light face frame, cropping the intercepted infrared picture according to the expanded face frame, and inputting the cropped infrared image into the multitask convolutional neural network for face detection; if no face is detected, returning to step S1 until a face is detected;
s3, key point detection: inputting visible light and infrared images with the face into a face dense key point deep learning model, and outputting face dense key point coordinates;
s4, living body detection model: calculating feature vectors of the dense face key points and the three-dimensional face feature vector, and training a living body detection classifier with an SVM (support vector machine);
s5, infrared human face model: training a deep learning model by using an infrared face image lightweight neural network according to the texture characteristics;
s6, comprehensive judgment: and judging the identity information corresponding to the real face according to the living body detection classifier, the trained deep learning model and the face recognition model, and determining whether the gate is opened or not.
2. The living body detection method of the human face gate based on visible light and infrared light as claimed in claim 1, wherein performing region expansion on the detected visible light face frame specifically means expanding the area of the detected visible light face frame by a factor of 1.2 to 1.5.
3. The method for live detection of the face gate based on visible light and infrared light as claimed in claim 1, wherein the visible light and infrared images in step S3 output 1000 dense key point coordinates of the face respectively.
4. The method for detecting the living body of the face gate based on the visible light and the infrared light as claimed in claim 3, wherein the step S4 specifically includes:
s401, calculating the relative distances in the x direction between the 1000 corresponding key point coordinates of the infrared face and the visible light face, forming a 1000-dimensional feature vector;
s402, calculating the pitch, yaw and roll angles of the face from the distance proportions among two eye key points, two mouth corner key points and one nose key point among the face key points, forming a three-dimensional feature vector;
and S403, fusing the two feature vectors into a 1003-dimensional vector, collecting a real and fake face data set, and training a living body detection classifier with an SVM (support vector machine).
5. The method for detecting the living body of the human face gate based on the visible light and the infrared light as claimed in claim 1, wherein the step S5 of training the deep learning model by the infrared human face image lightweight neural network according to the texture features specifically refers to training the deep learning model by the infrared human face image lightweight neural network FeatherNetB according to the texture features.
6. The method for detecting the living body of the face gate based on the visible light and the infrared light as claimed in claim 5, wherein the step S6 specifically includes:
s601, obtaining the cut visible light picture and the cut infrared picture, inputting the visible light picture and the cut infrared picture into a living body detection classifier, restarting the detection of a new pair of picture frames if the output result is a fake face, and executing the next step if the output result is a real face;
s602, inputting into the trained deep learning model: if the confidence of the fake face is greater than a threshold value, the face is determined to be fake and detection restarts on a new pair of image frames; otherwise the face is determined to be real;
s603, inputting the face recognition model, judging the identity information corresponding to the real face, and determining whether the gate is opened.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456278.5A CN111680588A (en) | 2020-05-26 | 2020-05-26 | Human face gate living body detection method based on visible light and infrared light |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010456278.5A CN111680588A (en) | 2020-05-26 | 2020-05-26 | Human face gate living body detection method based on visible light and infrared light |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111680588A true CN111680588A (en) | 2020-09-18 |
Family
ID=72453904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010456278.5A Pending CN111680588A (en) | 2020-05-26 | 2020-05-26 | Human face gate living body detection method based on visible light and infrared light |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111680588A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112036376A (en) * | 2020-10-01 | 2020-12-04 | 深圳奥比中光科技有限公司 | Method and device for detecting infrared image and depth image and face recognition system |
CN112132046A (en) * | 2020-09-24 | 2020-12-25 | 天津锋物科技有限公司 | Static living body detection method and system |
CN112200057A (en) * | 2020-09-30 | 2021-01-08 | 汉王科技股份有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN112232204A (en) * | 2020-10-16 | 2021-01-15 | 中科智云科技有限公司 | Living body detection method based on infrared image |
CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Human face living body detection method |
CN112528949A (en) * | 2020-12-24 | 2021-03-19 | 杭州慧芯达科技有限公司 | Binocular face recognition method and system based on multiband light |
CN112800853A (en) * | 2020-12-30 | 2021-05-14 | 广东智媒云图科技股份有限公司 | Toy initialization method and device and toy |
CN112801038A (en) * | 2021-03-02 | 2021-05-14 | 重庆邮电大学 | Multi-view face living body detection method and system |
CN112818821A (en) * | 2021-01-28 | 2021-05-18 | 广州广电卓识智能科技有限公司 | Human face acquisition source detection method and device based on visible light and infrared light |
CN113298993A (en) * | 2021-04-29 | 2021-08-24 | 杭州魔点科技有限公司 | Personnel access management method and system based on face recognition |
CN113553928A (en) * | 2021-07-13 | 2021-10-26 | 厦门瑞为信息技术有限公司 | Human face living body detection method and system and computer equipment |
CN113642428A (en) * | 2021-07-29 | 2021-11-12 | 北京百度网讯科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN114333123A (en) * | 2021-12-13 | 2022-04-12 | 南京熊猫电子股份有限公司 | Gate passage detection method, device and medium based on laser ranging element group |
CN114564100A (en) * | 2021-11-05 | 2022-05-31 | 南京大学 | Free stereoscopic display hand-eye interaction method based on infrared guidance |
CN116110111A (en) * | 2023-03-23 | 2023-05-12 | 平安银行股份有限公司 | Face recognition method, electronic equipment and storage medium |
CN116580445A (en) * | 2023-07-14 | 2023-08-11 | 江西脑控科技有限公司 | Large language model face feature analysis method, system and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558840A (en) * | 2018-11-29 | 2019-04-02 | 中国科学院重庆绿色智能技术研究院 | A kind of biopsy method of Fusion Features |
CN109815810A (en) * | 2018-12-20 | 2019-05-28 | 北京以萨技术股份有限公司 | A kind of biopsy method based on single camera |
CN110443192A (en) * | 2019-08-01 | 2019-11-12 | 中国科学院重庆绿色智能技术研究院 | A kind of non-interactive type human face in-vivo detection method and system based on binocular image |
CN110909617A (en) * | 2019-10-28 | 2020-03-24 | 广州多益网络股份有限公司 | Living body face detection method and device based on binocular vision |
- 2020-05-26: application CN202010456278.5A filed; publication CN111680588A, status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558840A (en) * | 2018-11-29 | 2019-04-02 | 中国科学院重庆绿色智能技术研究院 | A living body detection method based on feature fusion |
CN109815810A (en) * | 2018-12-20 | 2019-05-28 | 北京以萨技术股份有限公司 | A living body detection method based on a single camera |
CN110443192A (en) * | 2019-08-01 | 2019-11-12 | 中国科学院重庆绿色智能技术研究院 | A non-interactive human face living body detection method and system based on binocular images |
CN110909617A (en) * | 2019-10-28 | 2020-03-24 | 广州多益网络股份有限公司 | Living body face detection method and device based on binocular vision |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112132046A (en) * | 2020-09-24 | 2020-12-25 | 天津锋物科技有限公司 | Static living body detection method and system |
CN112200057B (en) * | 2020-09-30 | 2023-10-31 | 汉王科技股份有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN112200057A (en) * | 2020-09-30 | 2021-01-08 | 汉王科技股份有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN112036376A (en) * | 2020-10-01 | 2020-12-04 | 深圳奥比中光科技有限公司 | Method and device for detecting infrared image and depth image and face recognition system |
CN112036376B (en) * | 2020-10-01 | 2024-05-31 | 奥比中光科技集团股份有限公司 | Method, device and face recognition system for detecting infrared image and depth image |
CN112232204A (en) * | 2020-10-16 | 2021-01-15 | 中科智云科技有限公司 | Living body detection method based on infrared image |
CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Human face living body detection method |
CN112528949A (en) * | 2020-12-24 | 2021-03-19 | 杭州慧芯达科技有限公司 | Binocular face recognition method and system based on multiband light |
CN112800853A (en) * | 2020-12-30 | 2021-05-14 | 广东智媒云图科技股份有限公司 | Toy initialization method and device and toy |
CN112818821A (en) * | 2021-01-28 | 2021-05-18 | 广州广电卓识智能科技有限公司 | Human face acquisition source detection method and device based on visible light and infrared light |
CN112818821B (en) * | 2021-01-28 | 2023-02-03 | 广州广电卓识智能科技有限公司 | Human face acquisition source detection method and device based on visible light and infrared light |
CN112801038A (en) * | 2021-03-02 | 2021-05-14 | 重庆邮电大学 | Multi-view face living body detection method and system |
CN112801038B (en) * | 2021-03-02 | 2022-07-22 | 重庆邮电大学 | Multi-view face living body detection method and system |
CN113298993A (en) * | 2021-04-29 | 2021-08-24 | 杭州魔点科技有限公司 | Personnel access management method and system based on face recognition |
CN113553928A (en) * | 2021-07-13 | 2021-10-26 | 厦门瑞为信息技术有限公司 | Human face living body detection method and system and computer equipment |
CN113553928B (en) * | 2021-07-13 | 2024-03-22 | 厦门瑞为信息技术有限公司 | Human face living body detection method, system and computer equipment |
CN113642428B (en) * | 2021-07-29 | 2022-09-27 | 北京百度网讯科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN113642428A (en) * | 2021-07-29 | 2021-11-12 | 北京百度网讯科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN114564100A (en) * | 2021-11-05 | 2022-05-31 | 南京大学 | Free stereoscopic display hand-eye interaction method based on infrared guidance |
CN114564100B (en) * | 2021-11-05 | 2023-12-12 | 南京大学 | Infrared guiding-based hand-eye interaction method for auto-stereoscopic display |
CN114333123A (en) * | 2021-12-13 | 2022-04-12 | 南京熊猫电子股份有限公司 | Gate passage detection method, device and medium based on laser ranging element group |
CN116110111B (en) * | 2023-03-23 | 2023-09-08 | 平安银行股份有限公司 | Face recognition method, electronic equipment and storage medium |
CN116110111A (en) * | 2023-03-23 | 2023-05-12 | 平安银行股份有限公司 | Face recognition method, electronic equipment and storage medium |
CN116580445B (en) * | 2023-07-14 | 2024-01-09 | 江西脑控科技有限公司 | Large language model face feature analysis method, system and electronic equipment |
CN116580445A (en) * | 2023-07-14 | 2023-08-11 | 江西脑控科技有限公司 | Large language model face feature analysis method, system and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111680588A (en) | Human face gate living body detection method based on visible light and infrared light | |
CN106372601B (en) | Living body detection method and device based on infrared visible binocular images | |
CN105574518B (en) | Method and device for detecting living human face | |
CN105740779B (en) | Method and device for detecting living human face | |
CN105740778B (en) | Improved three-dimensional human face in-vivo detection method and device | |
WO2015149534A1 (en) | Gabor binary pattern-based face recognition method and device | |
CN105740781B (en) | Three-dimensional human face living body detection method and device | |
EP3905104B1 (en) | Living body detection method and device | |
CN106570491A (en) | Robot intelligent interaction method and intelligent robot | |
CN110705357A (en) | Face recognition method and face recognition device | |
KR101436290B1 (en) | Detection of fraud for access control system of biometric type | |
CN105913013A (en) | Binocular vision face recognition algorithm | |
CN102324134A (en) | Valuable document identification method and device | |
WO2008151471A1 (en) | A robust precise eye positioning method in complicated background image | |
CN106845410B (en) | Flame identification method based on deep learning model | |
CN111914761A (en) | Thermal infrared face recognition method and system | |
CN109389018B (en) | Face angle recognition method, device and equipment | |
CN110909634A (en) | Rapid living body detection method combining visible light and dual infrared | |
CN111767879A (en) | Living body detection method | |
CN112434647A (en) | Human face living body detection method | |
CN113255608A (en) | Multi-camera face recognition positioning method based on CNN classification | |
CN112633217A (en) | Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model | |
CN112801038B (en) | Multi-view face living body detection method and system | |
CN111881841B (en) | Face detection and recognition method based on binocular vision | |
CN106156739A (en) | Certificate photo ear detection and extraction method based on face mask analysis | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||