CN109376608A - A face liveness detection method - Google Patents
A face liveness detection method
- Publication number
- CN109376608A CN109376608A CN201811127323.1A CN201811127323A CN109376608A CN 109376608 A CN109376608 A CN 109376608A CN 201811127323 A CN201811127323 A CN 201811127323A CN 109376608 A CN109376608 A CN 109376608A
- Authority
- CN
- China
- Prior art keywords
- frame
- face
- point
- detection
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face liveness detection method whose object is to improve the safety and reliability of face recognition by means of liveness detection. The method realizes face liveness detection with image processing techniques; its key technical points are: (1) the MTCNN method is used to detect faces and extract 13 key points; (2) the extracted key points are used to recognize eye-blink motion and so judge whether the subject is a living body; (3) liveness is also judged by detecting photo borders, video borders, and mobile-phone borders; (4) blink judgement and border detection are combined to perform the final face liveness decision. The deep learning method of the invention guards well against photo and video attacks and greatly improves the accuracy and robustness of liveness detection.
Description
Technical field
The invention belongs to the field of image processing, and relates in particular to a face liveness detection method.
Background technique
With the arrival of the big-data era and the development of deep learning, face recognition technology has achieved important breakthroughs. Face recognition is now widely used in fields such as public security, banking and finance, criminal investigation, and social media, and has recently been applied to mobile-phone screen unlocking. However, impersonation attacks that exploit the weaknesses of face recognition keep emerging, chiefly using face photos and video replays as the means of impersonation. Improving the safety and reliability of face recognition by using liveness detection to guard effectively against such impersonation is a problem that remains to be solved.
Summary of the invention
The object of the invention is to improve the reliability that liveness detection brings to face recognition systems; to this end, a face liveness detection method is proposed.
The technical solution adopted by the invention is as follows:
A face liveness detection method, comprising the following steps:
Step 1: face detection and key-point localization:
The open-source MTCNN target-detection algorithm is used to train a face detection model and predict 13 key-point positions. MTCNN is a target-detection algorithm based on deep learning; the trained model detects the face box location (x_f, y_f, w_f, h_f) and outputs the 13 key-point locations (x_1, y_1), (x_2, y_2), …, (x_13, y_13);
Step 2: liveness detection based on blink motion:
Among the 13 key points obtained in step 1, point 1 is the left corner of the left eye, point 2 the upper-eyelid point of the left eye, point 3 the right corner of the left eye, point 4 the lower-eyelid point of the left eye, point 5 the left corner of the right eye, point 6 the upper-eyelid point of the right eye, point 7 the right corner of the right eye, and point 8 the lower-eyelid point of the right eye. Using Euclidean distance, the eye height h_i and width w_i in the i-th video frame are computed, along with their ratio u_i = h_i / w_i. The recorded values of u_i fluctuate above and below a set threshold T. Over a time period t, the total number of eye openings q and of eye closings k is counted; when |q + k| >= 2 and |q - k| <= a (for example a = 3), the method judges that blink motion has occurred and that the detected face is a living body.
The condition for judging that a blink has occurred is: the ratio satisfies u_(i+m) > T > u_(i+m+1) at frame i+m and, r frames later, u_(i+m+r) < T < u_(i+m+r+1); or u_(i+r) < T < u_(i+r+1) at frame i+r and, m frames later, u_(i+r+m) > T > u_(i+r+m+1). Here m is the number of frames in the interval between the eyes opening and closing, and r the number of frames between the eyes closing and opening;
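The blink rule above can be sketched in code: count the downward (closing) and upward (opening) crossings of the threshold T in the ratio sequence, then apply the |q + k| >= 2, |q - k| <= a test. This is a minimal sketch; the ratio sequence and threshold value below are invented for illustration, not data from the patent.

```python
def count_crossings(u, T):
    """Count upward (eye opening, q) and downward (eye closing, k)
    crossings of the threshold T in the per-frame ratio sequence u."""
    q = k = 0
    for prev, curr in zip(u, u[1:]):
        if prev > T > curr:      # ratio falls through T: eye closes
            k += 1
        elif prev < T < curr:    # ratio rises through T: eye opens
            q += 1
    return q, k

def has_blink(u, T, a=3):
    """Liveness rule from the text: enough open/close events in total
    (|q + k| >= 2) and roughly balanced (|q - k| <= a)."""
    q, k = count_crossings(u, T)
    return abs(q + k) >= 2 and abs(q - k) <= a

# One simulated blink: eye open (u high) -> closed (u low) -> open again.
u = [0.32, 0.30, 0.12, 0.10, 0.11, 0.29, 0.33]
print(has_blink(u, T=0.25))  # True: one closing and one opening crossing
```

A sequence with no threshold crossings (eyes held open, as in a printed photo) yields q = k = 0 and fails the test.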
Here w_i is computed as the Euclidean distance between the two eye-corner points of an eye, and h_i as the Euclidean distance between its upper- and lower-eyelid points.
The threshold T is obtained as follows:
1. Over the n eye pictures in the data set, compute the mean eye width w̄ and mean eye height h̄;
2. Compute the threshold T from w̄ and h̄.
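The eye geometry of step 2 can be sketched as follows. The original formula images are lost, so this assumes the natural reading: width and height as Euclidean distances between the corner points and the eyelid points, and T derived from the dataset means — taking T = h̄ / w̄ here is an assumption, since the exact formula for T is not recoverable from the text.

```python
import math

def eye_ratio(corner_l, lid_top, corner_r, lid_bottom):
    """u_i = h_i / w_i for one eye in frame i, from four (x, y) key points."""
    w = math.dist(corner_l, corner_r)   # eye width: corner to corner
    h = math.dist(lid_top, lid_bottom)  # eye height: lid opening
    return h / w

def threshold(widths, heights):
    """T from dataset mean width and mean height (assumed form h_bar / w_bar)."""
    w_bar = sum(widths) / len(widths)
    h_bar = sum(heights) / len(heights)
    return h_bar / w_bar

u = eye_ratio((0, 0), (5, 3), (10, 0), (5, -3))
print(round(u, 2))  # 0.6: height 6 over width 10
```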
Step 3: liveness detection based on border detection:
A border-detection convolutional neural network model for detecting photo borders, video borders, and mobile-phone borders is trained with deep learning. In the i-th video frame the detected face box location is (x_fi, y_fi, w_fi, h_fi); if the border-detection model detects a border, its location is (x_bi, y_bi, w_bi, h_bi). Whenever the face box is contained in a border detected by the model, the face is not considered a living body; otherwise it is considered a living body;
The formulas for judging that the face box is contained in a detected border are:
x_fi > x_bi; y_fi > y_bi (7)
x_fi + w_fi < x_bi + w_bi (8)
y_fi + h_fi < y_bi + h_bi (9)
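The containment test (7)-(9) can be sketched directly; boxes are (x, y, w, h) tuples with a top-left origin, and the example coordinates are illustrative.

```python
def face_inside_border(face, border):
    """True when the face box lies strictly inside the border box,
    per formulas (7)-(9) of the method."""
    xf, yf, wf, hf = face
    xb, yb, wb, hb = border
    return (xf > xb and yf > yb            # (7) top-left corner inside
            and xf + wf < xb + wb          # (8) right edge inside
            and yf + hf < yb + hb)         # (9) bottom edge inside

print(face_inside_border((30, 30, 40, 50), (10, 10, 100, 120)))  # True
print(face_inside_border((5, 30, 40, 50), (10, 10, 100, 120)))   # False
```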
(3a) Preparation of training and test data: use a camera to photograph printed pictures, record videos played on a mobile phone, and record videos containing photos, giving N pictures in total; annotate the border locations, with each annotation box slightly larger than the border.
(3b) Data augmentation: the N pictures are augmented by translation and by image rotation (0°, 90°, 180°, 270°), enlarging the data to 16 times its original size; the data set is divided into training, validation, and test sets in the ratio 8:1:1.
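The data preparation in (3a)-(3b) can be sketched as below: four rotations combined with four translation variants turn each picture into 16 samples, and the result is split 8:1:1. The count of four translations and the shuffle seed are illustrative assumptions; the patent does not specify the translation offsets.

```python
import random

def augment_count(n_pictures, n_rotations=4, n_translations=4):
    """Each picture yields rotations x translations = 16 samples."""
    return n_pictures * n_rotations * n_translations

def split_8_1_1(samples, seed=0):
    """Shuffle and split into train/validation/test in the ratio 8:1:1."""
    rng = random.Random(seed)
    s = samples[:]
    rng.shuffle(s)
    n = len(s)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return s[:n_train], s[n_train:n_train + n_val], s[n_train + n_val:]

total = augment_count(1000)
train, val, test = split_8_1_1(list(range(total)))
print(total, len(train), len(val), len(test))  # 16000 12800 1600 1600
```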
(3c) Image processing: the images are scaled to a size of M × M and converted to grayscale; the processed image data is saved as an HDF5 (Hierarchical Data Format) file.
(3d) Network structure: the convolutional neural network contains 3 convolutional layers, 1 BN layer, and 2 fully connected layers; its output layer has 4 nodes, and a loss function is selected.
(3f) Training: the processed data is fed into the convolutional neural network; the network input batch size is set to 64, the connection weights W and biases b of each layer are randomly initialized, and a learning rate η is given.
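The training loop in (3f) — random initialization of W and b, gradient updates scaled by the learning rate η — can be illustrated with a deliberately tiny stand-in: a 1-D linear model under squared loss instead of the actual CNN, which the patent does not specify in enough detail to reproduce.

```python
import random

random.seed(0)
W = random.uniform(-0.1, 0.1)  # random initialization, as in (3f)
b = 0.0
eta = 0.1                      # learning rate (illustrative value)

def sgd_step(W, b, x, y, eta):
    """One gradient step on 0.5 * (W*x + b - y)**2."""
    err = W * x + b - y
    dW, db = err * x, err      # gradients of the squared error
    return W - eta * dW, b - eta * db

# Fit y = 2x from two repeated samples; W should approach 2, b approach 0.
for x, y in [(1.0, 2.0), (2.0, 4.0)] * 200:
    W, b = sgd_step(W, b, x, y, eta)
print(round(W, 2), round(b, 2))  # W close to 2.0
```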
(3e) Testing: the test set data is fed into the trained border detection model to detect whether a border is present; if so, the border location (x_b, y_b, w_b, h_b) is output;
Step 4: liveness detection combining border detection and blink judgement:
Joint liveness detection is performed with the two methods described in steps 2 and 3. Face detection is first performed on the video captured by the camera; if a face is detected, border detection begins. If a border is detected in the video and the detected face box lies inside it, the subject is not allowed to pass and is judged to be a non-living body. If no border is detected, or the detected border does not contain the face, the eyes in the video are checked for blinking: if a blink occurs the subject is judged to be a living body, otherwise a non-living body.
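The combined decision of step 4 can be sketched as a single function: border containment is checked first, and the blink test is consulted only when no detected border contains the face. The detector outputs passed in below are hypothetical placeholder values.

```python
def is_live(face_box, border_box, blinked):
    """face_box / border_box: (x, y, w, h) tuples or None; blinked: bool."""
    if face_box is None:
        return False  # no face found, nothing to verify
    if border_box is not None:
        xf, yf, wf, hf = face_box
        xb, yb, wb, hb = border_box
        inside = (xf > xb and yf > yb
                  and xf + wf < xb + wb and yf + hf < yb + hb)
        if inside:
            return False  # face shown inside a photo/screen border: spoof
    return blinked        # otherwise fall back to the blink judgement

print(is_live((30, 30, 40, 50), (10, 10, 100, 120), blinked=True))  # False
print(is_live((30, 30, 40, 50), None, blinked=True))                # True
```

Note the ordering matters: a replayed video of a blinking face would pass the blink test alone, but is rejected here because the border check runs first.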
Brief description of the drawings
A specific embodiment of the invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the eye-blink judgement in the face liveness detection method of the invention;
Fig. 2 shows the photo, video, and mobile-phone border detection network of the method;
Fig. 3 is the overall flow diagram of the liveness detection of the invention.
Specific embodiment
The invention discloses a face liveness detection method; a specific embodiment is described in detail below with reference to the accompanying drawings.
Step 1: face detection and key-point localization:
The open-source MTCNN target-detection algorithm is used to train a face detection model and predict 13 key-point positions. MTCNN is a target-detection algorithm based on deep learning; the trained model detects the face box location (x_f, y_f, w_f, h_f) and outputs the 13 key-point locations (x_1, y_1), (x_2, y_2), …, (x_13, y_13);
Step 2: liveness detection based on blink motion:
Among the 13 key points obtained in step 1, point 1 is the left corner of the left eye, point 2 the upper-eyelid point of the left eye, point 3 the right corner of the left eye, point 4 the lower-eyelid point of the left eye, point 5 the left corner of the right eye, point 6 the upper-eyelid point of the right eye, point 7 the right corner of the right eye, and point 8 the lower-eyelid point of the right eye. Using Euclidean distance, the eye height h_i and width w_i in the i-th video frame are computed, along with their ratio u_i = h_i / w_i. The recorded values of u_i fluctuate above and below a set threshold T. Over a time period t, the total number of eye openings q and of eye closings k is counted; when |q + k| >= 2 and |q - k| <= a (for example a = 3), the method judges that blink motion has occurred and that the detected face is a living body.
The condition for judging that a blink has occurred is: the ratio satisfies u_(i+m) > T > u_(i+m+1) at frame i+m and, r frames later, u_(i+m+r) < T < u_(i+m+r+1); or u_(i+r) < T < u_(i+r+1) at frame i+r and, m frames later, u_(i+r+m) > T > u_(i+r+m+1). Here m is the number of frames in the interval between the eyes opening and closing, and r the number of frames between the eyes closing and opening;
Here w_i is computed as the Euclidean distance between the two eye-corner points of an eye, and h_i as the Euclidean distance between its upper- and lower-eyelid points.
The threshold T is obtained as follows:
1. Over the n eye pictures in the data set, compute the mean eye width w̄ and mean eye height h̄;
2. Compute the threshold T from w̄ and h̄.
Step 3: liveness detection based on border detection:
A border-detection convolutional neural network model for detecting photo borders, video borders, and mobile-phone borders is trained with deep learning. In the i-th video frame the detected face box location is (x_fi, y_fi, w_fi, h_fi); if the border-detection model detects a border, its location is (x_bi, y_bi, w_bi, h_bi). Whenever the face box is contained in a border detected by the model, the face is not considered a living body; otherwise it is considered a living body;
The formulas for judging that the face box is contained in a detected border are:
x_fi > x_bi; y_fi > y_bi (7)
x_fi + w_fi < x_bi + w_bi (8)
y_fi + h_fi < y_bi + h_bi (9)
(3a) Preparation of training and test data: use a camera to photograph printed pictures, record videos played on a mobile phone, and record videos containing photos, giving N pictures in total; annotate the border locations, with each annotation box slightly larger than the border.
(3b) Data augmentation: the N pictures are augmented by translation and by image rotation (0°, 90°, 180°, 270°), enlarging the data to 16 times its original size; the data set is divided into training, validation, and test sets in the ratio 8:1:1.
(3c) Image processing: the images are scaled to a size of M × M and converted to grayscale; the processed image data is saved as an HDF5 (Hierarchical Data Format) file.
(3d) Network structure: the convolutional neural network contains 3 convolutional layers, 1 BN layer, and 2 fully connected layers; its output layer has 4 nodes, and a loss function is selected.
(3f) Training: the processed data is fed into the convolutional neural network; the network input batch size is set to 64, the connection weights W and biases b of each layer are randomly initialized, and a learning rate η is given.
(3e) Testing: the test set data is fed into the trained border detection model to detect whether a border is present; if so, the border location (x_b, y_b, w_b, h_b) is output;
Step 4: liveness detection combining border detection and blink judgement:
Joint liveness detection is performed with the two methods described in steps 2 and 3. Face detection is first performed on the video captured by the camera; if a face is detected, border detection begins. If a border is detected in the video and the detected face box lies inside it, the subject is not allowed to pass and is judged to be a non-living body. If no border is detected, or the detected border does not contain the face, the eyes in the video are checked for blinking: if a blink occurs the subject is judged to be a living body, otherwise a non-living body.
Claims (5)
1. A face liveness detection method, characterised in that it comprises the following steps:
Step 1: face detection and key-point localization;
Step 2: liveness detection based on blink motion;
Step 3: liveness detection based on border detection;
Step 4: liveness detection combining border detection and blink judgement.
2. The face liveness detection method as claimed in claim 1, characterised in that:
The step 1 is specifically as follows:
The open-source MTCNN target-detection algorithm is used to train a face detection model and predict 13 key-point positions. MTCNN is a target-detection algorithm based on deep learning; the trained model detects the face box location (x_f, y_f, w_f, h_f) and outputs the 13 key-point locations (x_1, y_1), (x_2, y_2), …, (x_13, y_13).
3. The face liveness detection method as claimed in claim 2, characterised in that:
The step 2 is specifically as follows:
Among the 13 key points obtained in step 1, point 1 is the left corner of the left eye, point 2 the upper-eyelid point of the left eye, point 3 the right corner of the left eye, point 4 the lower-eyelid point of the left eye, point 5 the left corner of the right eye, point 6 the upper-eyelid point of the right eye, point 7 the right corner of the right eye, and point 8 the lower-eyelid point of the right eye. Using Euclidean distance, the eye height h_i and width w_i in the i-th video frame are computed, along with their ratio u_i = h_i / w_i. The recorded values of u_i fluctuate above and below a set threshold T. Over a time period t, the total number of eye openings q and of eye closings k is counted; when |q + k| >= 2 and |q - k| <= a (for example a = 3), the method judges that blink motion has occurred and that the detected face is a living body;
The condition for judging that a blink has occurred is: the ratio satisfies u_(i+m) > T > u_(i+m+1) at frame i+m and, r frames later, u_(i+m+r) < T < u_(i+m+r+1); or u_(i+r) < T < u_(i+r+1) at frame i+r and, m frames later, u_(i+r+m) > T > u_(i+r+m+1). Here m is the number of frames in the interval between the eyes opening and closing, and r the number of frames between the eyes closing and opening;
Here w_i is computed as the Euclidean distance between the two eye-corner points of an eye, and h_i as the Euclidean distance between its upper- and lower-eyelid points.
The threshold T is obtained as follows:
1. Over the n eye pictures in the data set, compute the mean eye width w̄ and mean eye height h̄;
2. Compute the threshold T from w̄ and h̄.
4. The face liveness detection method as claimed in claim 3, characterised in that:
Step 3 is specifically as follows:
A border-detection convolutional neural network model for detecting photo borders, video borders, and mobile-phone borders is trained with deep learning. In the i-th video frame the detected face box location is (x_fi, y_fi, w_fi, h_fi); if the border-detection model detects a border, its location is (x_bi, y_bi, w_bi, h_bi). Whenever the face box is contained in a border detected by the model, the face is not considered a living body; otherwise it is considered a living body;
The formulas for judging that the face box is contained in a detected border are:
x_fi > x_bi; y_fi > y_bi (7)
x_fi + w_fi < x_bi + w_bi (8)
y_fi + h_fi < y_bi + h_bi (9)
(3a) Preparation of training and test data: use a camera to photograph printed pictures, record videos played on a mobile phone, and record videos containing photos, giving N pictures in total; annotate the border locations, with each annotation box slightly larger than the border.
(3b) Data augmentation: the N pictures are augmented by translation and by image rotation (0°, 90°, 180°, 270°), enlarging the data to 16 times its original size; the data set is divided into training, validation, and test sets in the ratio 8:1:1.
(3c) Image processing: the images are scaled to a size of M × M and converted to grayscale; the processed image data is saved as an HDF5 (Hierarchical Data Format) file.
(3d) Network structure: the convolutional neural network contains 3 convolutional layers, 1 BN layer, and 2 fully connected layers; its output layer has 4 nodes, and a loss function is selected.
(3f) Training: the processed data is fed into the convolutional neural network; the network input batch size is set to 64, the connection weights W and biases b of each layer are randomly initialized, and a learning rate η is given.
(3e) Testing: the test set data is fed into the trained border detection model to detect whether a border is present; if so, the border location (x_b, y_b, w_b, h_b) is output.
5. The face liveness detection method as claimed in claim 4, characterised in that:
The step 4 is specifically as follows:
Joint liveness detection is performed with the two methods described in steps 2 and 3. Face detection is first performed on the video captured by the camera; if a face is detected, border detection begins. If a border is detected in the video and the detected face box lies inside it, the subject is not allowed to pass and is judged to be a non-living body. If no border is detected, or the detected border does not contain the face, the eyes in the video are checked for blinking: if a blink occurs the subject is judged to be a living body, otherwise a non-living body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811127323.1A CN109376608B (en) | 2018-09-26 | 2018-09-26 | Human face living body detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811127323.1A CN109376608B (en) | 2018-09-26 | 2018-09-26 | Human face living body detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109376608A true CN109376608A (en) | 2019-02-22 |
CN109376608B CN109376608B (en) | 2021-04-27 |
Family
ID=65402104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811127323.1A Active CN109376608B (en) | 2018-09-26 | 2018-09-26 | Human face living body detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376608B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160171324A1 (en) * | 2013-10-14 | 2016-06-16 | Mircea Ionita | Methods and systems for determining user liveness |
US20160140390A1 (en) * | 2014-11-13 | 2016-05-19 | Intel Corporation | Liveness detection using progressive eyelid tracking |
CN107239735A (en) * | 2017-04-24 | 2017-10-10 | 复旦大学 | A kind of biopsy method and system based on video analysis |
CN107609463A (en) * | 2017-07-20 | 2018-01-19 | 百度在线网络技术(北京)有限公司 | Biopsy method, device, equipment and storage medium |
CN107609494A (en) * | 2017-08-31 | 2018-01-19 | 北京飞搜科技有限公司 | A kind of human face in-vivo detection method and system based on silent formula |
CN108140123A (en) * | 2017-12-29 | 2018-06-08 | 深圳前海达闼云端智能科技有限公司 | Face living body detection method, electronic device and computer program product |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110069983A (en) * | 2019-03-08 | 2019-07-30 | 深圳神目信息技术有限公司 | Vivo identification method, device, terminal and readable medium based on display medium |
CN110059624A (en) * | 2019-04-18 | 2019-07-26 | 北京字节跳动网络技术有限公司 | Method and apparatus for detecting living body |
CN110399780A (en) * | 2019-04-26 | 2019-11-01 | 努比亚技术有限公司 | A kind of method for detecting human face, device and computer readable storage medium |
CN110399780B (en) * | 2019-04-26 | 2023-09-29 | 努比亚技术有限公司 | Face detection method and device and computer readable storage medium |
CN110349310A (en) * | 2019-07-03 | 2019-10-18 | 源创客控股集团有限公司 | A kind of making prompting cloud platform service system for garden enterprise |
CN112560553A (en) * | 2019-09-25 | 2021-03-26 | 北京中关村科金技术有限公司 | Living body detection method, living body detection device, and storage medium |
CN110717430A (en) * | 2019-09-27 | 2020-01-21 | 聚时科技(上海)有限公司 | Long object identification method and identification system based on target detection and RNN |
CN112861586A (en) * | 2019-11-27 | 2021-05-28 | 马上消费金融股份有限公司 | Living body detection, image classification and model training method, device, equipment and medium |
CN112949365A (en) * | 2019-12-10 | 2021-06-11 | 纬创资通股份有限公司 | Living face identification system and method |
TWI731503B (en) * | 2019-12-10 | 2021-06-21 | 緯創資通股份有限公司 | Live facial recognition system and method |
US11315360B2 (en) | 2019-12-10 | 2022-04-26 | Wistron Corporation | Live facial recognition system and method |
CN111241949B (en) * | 2020-01-03 | 2021-12-14 | 中科智云科技有限公司 | Image recognition method and device, electronic equipment and readable storage medium |
CN111241949A (en) * | 2020-01-03 | 2020-06-05 | 中科智云科技有限公司 | Image recognition method and device, electronic equipment and readable storage medium |
CN111666835A (en) * | 2020-05-20 | 2020-09-15 | 广东志远科技有限公司 | Face living body detection method and device |
CN111539389B (en) * | 2020-06-22 | 2020-10-27 | 腾讯科技(深圳)有限公司 | Face anti-counterfeiting recognition method, device, equipment and storage medium |
CN111539389A (en) * | 2020-06-22 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Face anti-counterfeiting recognition method, device, equipment and storage medium |
CN112200086A (en) * | 2020-10-10 | 2021-01-08 | 深圳市华付信息技术有限公司 | Face living body detection method based on video |
CN112560819A (en) * | 2021-02-22 | 2021-03-26 | 北京远鉴信息技术有限公司 | User identity verification method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109376608B (en) | 2021-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109376608A (en) | A kind of human face in-vivo detection method | |
CN109583342B (en) | Human face living body detection method based on transfer learning | |
CN108182409B (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN104091176B (en) | Portrait comparison application technology in video | |
KR100831122B1 (en) | Face authentication apparatus, face authentication method, and entrance and exit management apparatus | |
CN109255322A (en) | A kind of human face in-vivo detection method and device | |
CN110472519B (en) | Human face in-vivo detection method based on multiple models | |
CN108549854A (en) | A kind of human face in-vivo detection method | |
CN109190475B (en) | Face recognition network and pedestrian re-recognition network collaborative training method | |
CN108615226A (en) | A kind of image defogging method fighting network based on production | |
CN105243376A (en) | Living body detection method and device | |
CN111563557A (en) | Method for detecting target in power cable tunnel | |
CN113536972B (en) | Self-supervision cross-domain crowd counting method based on target domain pseudo label | |
CN107230267A (en) | Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method | |
CN111209820B (en) | Face living body detection method, system, equipment and readable storage medium | |
CN103605971A (en) | Method and device for capturing face images | |
CN109711309B (en) | Method for automatically identifying whether portrait picture is eye-closed | |
CN110674680B (en) | Living body identification method, living body identification device and storage medium | |
CN105138967B (en) | Biopsy method and device based on human eye area active state | |
WO2019153175A1 (en) | Machine learning-based occluded face recognition system and method, and storage medium | |
CN113313037A (en) | Method for detecting video abnormity of generation countermeasure network based on self-attention mechanism | |
CN110287829A (en) | A kind of video face identification method of combination depth Q study and attention model | |
CN113378649A (en) | Identity, position and action recognition method, system, electronic equipment and storage medium | |
CN107944395A (en) | A kind of method and system based on neutral net verification testimony of a witness unification | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||