CN108009531B - Multi-strategy anti-fraud face recognition method


Info

Publication number
CN108009531B
CN108009531B (application CN201711452306.0A)
Authority
CN
China
Prior art keywords
face
model
image
face recognition
detection
Prior art date
Legal status
Active
Application number
CN201711452306.0A
Other languages
Chinese (zh)
Other versions
CN108009531A (en)
Inventor
王素玉
于晨
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201711452306.0A
Publication of CN108009531A
Application granted
Publication of CN108009531B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V40/168 Feature extraction; Face representation
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a multi-strategy anti-fraud face recognition method comprising the following steps. Step 1: during face detection, detect whether the bright pupils produced by the near-infrared device are present at the pupil positions of the eyes, and detect whether the variance within the facial skin-color gray-scale interval of the face region meets a threshold, in order to judge whether a spoofing attack is present. Step 2: exploiting the dynamic nature of the face recognition process, detect micro-expressions during verification via 3D face reconstruction based on the Candide-3 model, compare the similarity of the face features and 3D models across three adjacent frames, and set reasonable thresholds to reject faces that are excessively similar.

Description

Multi-strategy anti-fraud face recognition method
Technical Field
The invention belongs to the field of biological feature recognition, and particularly relates to a multi-strategy anti-fraud face recognition method.
Background
At present, face recognition technology is widely applied; its main aim is to identify the identity of a face image. However, the face recognition process faces multiple fraud means, so verifying the authenticity of an input face has become an urgent problem for face recognition technology. Face recognition mainly faces three fraud approaches: face pictures of legitimate users, face videos of legitimate users, and 3D models of legitimate users. Traditional liveness detection is usually aimed at a single fraud means, leaving a weak point that forgery attacks can easily exploit. Alternatively, action verification with user cooperation may be adopted, but this mode has poor user experience and limited resistance. Only a multi-level strategy can effectively cope with complex, multi-form face spoofing attacks.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a multi-strategy anti-fraud face recognition method that effectively prevents spoofing attacks using pictures, electronic-device images and videos. Following the barrel (weakest-link) principle, it accounts for the diversity of fraudulent behaviors, adopts multi-level strategies, and respects the real-time requirements of the face recognition process.
Specifically, the various fraud means faced at multiple links of the face recognition process are considered comprehensively. During face registration, face detection and face recognition, and without requiring action verification from the user, an impostor is prevented from committing fraud with a legitimate user's face photo or with pictures and videos played back on electronic equipment. Because a near-infrared camera is used as the video-stream acquisition device, its spectral characteristics defeat electronic-screen spoofing; the bright-pupil effect produced at the pupil by the infrared lamp is likewise an effective means of defeating face-photo spoofing; and the near-infrared camera works normally in unlit, low-visibility environments. In the face registration link, a residual-based convolutional neural network (ResNet) model extracts a 128-dimensional face feature vector and stores it in the face feature library. In the face detection link, a dual-channel design of bright-pupil effect and skin-color model is applied to the near-infrared image. In the face recognition stage, the feature uniformity and the dynamics of a spoofing attack are fully exploited to intercept fraudulent face recognition.
The invention adopts the following technical scheme:
The method comprises the following steps. Step one: perform face detection on the obtained face image. During near-infrared face detection, detect face key points with the GBDT algorithm, coarsely locate the eyes and finely locate the pupils, and detect whether the bright-pupil effect produced by the near-infrared camera's illumination structure is present; meanwhile, compute the skin-color histogram of the face region in the near-infrared image and judge whether it matches a real face;
Step two: during face recognition, extract the face features of three frames with the residual-based convolutional neural network (ResNet) model and calculate the Euclidean distance between them, rejecting faces that are too similar; features satisfying the requirement are added to the face feature queue. Using the face key points found during detection, perform 3D face reconstruction based on the Candide-3 model, extract the 3D face models of the three frames, calculate the R-squared goodness-of-fit coefficient, and detect whether a micro-expression change is present. Once these conditions are met, compare the features in the face recognition feature queue with the registered face library by Euclidean distance to complete the face recognition.
The face recognition method is suitable for strict face identity authentication, and prevents a user from committing fraudulent authentication with a legitimate user's face photos or video data. According to the barrel (weakest-link) principle, the invention uses different modes, or multiple strategies, against different fraud measures. To defeat image spoofing from electronic devices, the invention acquires the video stream as near-infrared images. The near-infrared camera adapts to different illumination environments and at the same time blinds the screen imaging of electronic devices; this is one of the system's important fraud-prevention strategies.
In the first step, face key points are detected with the gradient-boosted regression tree method and the eye positions are located; the pupils are then located by Hough circle detection, the pupil region is binarized in gray scale, and the number of white-spot pixels is counted. With the size threshold set to 2 to 10 pixels, whether the bright-pupil effect produced by the near-infrared camera's illumination structure is present is measured, preventing spoofing with a legitimate user's face photo. Meanwhile, the face-region image is matched against the skin-color gray-histogram model: the variance is calculated over the gray-value interval 75 to 95, and a variance smaller than 18 is judged to be fraud. This prevents spoofing attacks using specially processed photos of legitimate users.
In the second step, addressing the dynamics of the face recognition process, a residual-based convolutional neural network (ResNet) model extracts 128-dimensional face feature vectors at registration time; near-infrared and visible-light face images are registered separately, and a near-infrared face library and a visible-light face library are established for recognition and retrieval. During recognition, the ResNet model extracts the face features of three frames and the Euclidean distance between them is calculated; the threshold range is 0.8 to 0.98, within which the similarity is considered normal, and features meeting the condition are added to the face recognition feature queue. From the face key points found during detection, a 3D face model is reconstructed based on the Candide-3 model, the 3D model parameters of the three frames are extracted, the R-squared goodness-of-fit coefficient is calculated, and a micro-expression change is detected within the threshold range 0.8 to 0.98. Finally, the features in the face recognition feature queue are compared with the registered face library, the Euclidean distance is calculated with the threshold set above 0.6, and matching succeeds when the condition is met.
Therefore, the invention has the following advantages. From the standpoint of actual use, the method balances user experience against the security of identity authentication: the user performs no cooperative actions or expressions during authentication, anti-fraud authentication is contactless, and various identity frauds across the whole face authentication process are effectively prevented. The hardware used is low-cost and unaffected by illumination, and can operate all-weather in complex environments.
Drawings
Fig. 1 is an overall flow chart of the method of the present invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments.
The invention provides a multi-strategy anti-fraud face recognition method suitable for a near-infrared (NIR) camera video acquisition platform. Following the barrel (weakest-link) principle, multi-level anti-fraud means at each link of face recognition improve the security performance of the system. The overall framework is shown in figure 1: measures are taken against different fraudulent authentication behaviors, and authentication proceeds rapidly without cooperative actions from the user.
As shown in fig. 1, the present invention provides a multi-policy anti-fraud face recognition method, which includes the following steps:
the method comprises the following steps: multi-face registration by using near-infrared camera to collect images
The face registration can ensure the authenticity of face recognition; the missing rate can be reduced by registering a plurality of face pictures, and the accuracy of face recognition is improved; the adoption of the near infrared facial image registration can reduce the fraudulent behavior faced by the face recognition from the source.
The invention exploits the imaging characteristics of near infrared: the near-infrared camera lets the system adapt to different illumination environments, and at the same time, by virtue of the spectral characteristics of NIR light, it effectively blocks image spoofing from electronic equipment. It constitutes the first liveness detection channel.
The user can pass verification quickly even in unlit, low-visibility environments. A face is detected in each image of the near-infrared video stream by a HOG-feature-based face detection algorithm with a sliding detection window. The HOG detection algorithm converts the YUV-channel near-infrared image into a gray image, then raises or lowers the overall brightness of the image by Gamma square-root normalization correction (with gamma = 0.5, as shown in formula one), realizing color-space normalization;
Y(x, y) = I(x, y)^\gamma
formula one
wherein (x, y) are the coordinates of a pixel in the image, I(x, y) is the gray value of the original image, and Y(x, y) is the processed image.
Then, the image after color space normalization is subjected to gradient and gradient direction calculation in the horizontal and vertical directions,
the gradient operator is shown in formula two:
G_x(x, y) = I(x+1, y) - I(x-1, y)
G_y(x, y) = I(x, y+1) - I(x, y-1)
G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}
\alpha(x, y) = \arctan\left( G_y(x, y) / G_x(x, y) \right)
formula two
wherein (x, y) are the coordinates of a pixel in the image, G_x and G_y are the horizontal and vertical gray-level gradients, G(x, y) is the gradient magnitude, and \alpha(x, y) is the gradient orientation,
and dividing the image into a plurality of units, performing gradient direction histogram statistics, performing feature vector normalization, and judging whether a human face exists or not by using an SVM (support vector machine) classifier for the obtained Hog feature vector.
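The gamma correction of formula one and the gradient computation of formula two can be sketched as follows. This is an illustrative NumPy sketch, not the claimed implementation; array handling and border treatment (zero-padded edges) are our assumptions.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    # Formula one: Y(x, y) = I(x, y)^gamma, with gamma = 0.5 (square-root normalization).
    # Gray values are first scaled to [0, 1].
    return (img.astype(np.float64) / 255.0) ** gamma

def gradients(img):
    # Formula two: centered differences, then gradient magnitude and orientation.
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # Gx(x, y) = I(x+1, y) - I(x-1, y)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # Gy(x, y) = I(x, y+1) - I(x, y-1)
    mag = np.sqrt(gx ** 2 + gy ** 2)         # G(x, y)
    ang = np.arctan2(gy, gx)                 # alpha(x, y), gradient orientation
    return mag, ang
```

The magnitude and orientation maps would then be binned per cell into the gradient-direction histograms that form the HOG descriptor.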
Step two: the detected face pictures are sent into a bright-pupil detection channel and a skin-color matching detection channel respectively; together they constitute the second liveness detection channel.
In the bright-pupil detection process, it is detected whether the image contains the bright-pupil effect produced by the near-infrared camera's illumination structure. The eye region is located by the gradient-boosted regression tree method, in which the optimum of the loss function is sought by steepest descent: a CART decision tree fits the negative gradient, and Newton's method solves for the step length. The eye-region image is binarized region by region, the size of the white spot inside the pupil is calculated, and the size threshold is set to 2 to 10 pixels, thereby detecting whether the bright-pupil effect is present and preventing face-photo spoofing of a legitimate user.
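The white-spot count inside the located pupil region can be sketched as below. This is an illustrative sketch only: the binarization level `BRIGHT_LEVEL` is an assumed value (the patent gives only the 2 to 10 pixel window), and the Hough-circle pupil localization that precedes this check is omitted.

```python
import numpy as np

BRIGHT_LEVEL = 200           # assumed gray level separating the NIR glint from the pupil
MIN_SPOT, MAX_SPOT = 2, 10   # white-spot pixel-count window given in the description

def bright_pupil_present(pupil_roi):
    # Binarize the pupil region and count white pixels; the bright-pupil effect
    # is accepted only when the spot size lies within [MIN_SPOT, MAX_SPOT].
    n_white = int((np.asarray(pupil_roi) >= BRIGHT_LEVEL).sum())
    return MIN_SPOT <= n_white <= MAX_SPOT
```

A printed photo produces no glint (count 0), while a real eye under the NIR illuminator yields a small bright spot inside the window.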
Skin-color matching detection computes, from the skin-color histogram of the face region in the near-infrared image, whether the variance within the gray-value interval (75, 95) is smaller than 18, in order to judge whether legitimate-face photo spoofing is present. The dual-channel design of bright-pupil effect plus skin-color model effectively prevents spoofing with specially processed photos of legitimate faces.
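The skin-color variance test can be sketched as follows; a minimal sketch using the interval (75, 95) and the variance floor of 18 from the description. The rejection of faces with no pixels in the interval is our assumption.

```python
import numpy as np

def skin_variance_ok(face_gray, lo=75, hi=95, min_var=18.0):
    # Variance of the gray values falling inside the skin interval (75, 95);
    # a variance below 18 is treated as a photo/screen spoof per the description.
    vals = np.asarray(face_gray, dtype=np.float64)
    vals = vals[(vals > lo) & (vals < hi)]
    if vals.size == 0:
        return False          # assumed: no skin-toned pixels at all is also a rejection
    return bool(vals.var() >= min_var)
```

A flat reproduction (printed or displayed face) tends to compress the skin-tone spread, driving the in-interval variance below the floor.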
Step three: when the face image cannot satisfy the conditions of both channels simultaneously, the process terminates and the next frame is examined; otherwise, the ResNet model extracts the identity features. The ResNet architecture adopted by the invention mainly comprises a residual module and a metric learning module. The input image is normalized to 150 pixels. The residual module uses computationally economical 3 x 3 and 1 x 1 convolution kernels. Because ResNet adopts the residual-module design, i.e. an identity mapping is added so that the originally required function H(x) becomes F(x) + x, the vanishing-gradient problem is effectively alleviated. Through stride-2 convolutions and pooling layers, the image is reduced to a 1 x 256 feature representation and fully connected to a layer of length 128. Metric learning is performed with the triplet loss function, expressed in formula three as follows:
L = \sum_{i}^{N} \left[ \| f(x_i^a) - f(x_i^p) \|_2^2 - \| f(x_i^a) - f(x_i^n) \|_2^2 + \alpha \right]_+
formula three
wherein f(x) is the feature expression of each element of the triplet; [.]_+ takes the bracketed value as the loss when it is greater than zero and zero otherwise; the superscript a denotes the anchor sample, p the same-person (positive) feature, n the non-same-person (negative) feature, and \alpha is the margin.
The coding length is set to 128, so the feature value of a face image is abstractly represented by a 128-dimensional feature vector. The face features extracted by the model are added into the designed face feature queue.
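The triplet loss of formula three can be sketched numerically as below. This is an illustrative NumPy sketch of the standard triplet loss, not the trained network; the margin value 0.2 is an assumed hyperparameter not given in the text.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Formula three: sum over the batch of [ ||f(a)-f(p)||^2 - ||f(a)-f(n)||^2 + margin ]_+
    # anchor/positive/negative are 128-d embeddings f(x); margin = 0.2 is assumed.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # squared distance to same person
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)   # squared distance to other person
    return float(np.sum(np.maximum(d_pos - d_neg + margin, 0.0)))
```

Training pushes same-person embeddings together and different-person embeddings apart until the loss vanishes, which is what makes the later Euclidean-distance comparisons meaningful.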
The dynamics of the face recognition process mean that the face angle changes during verification and micro-expression changes occur. The third liveness detection channel therefore adopts a dynamic verification mode. The length of the face feature queue is set to 3, and the similarity of face features is measured by the Euclidean distance between two 128-dimensional vectors, as shown in formula four:
d = \sqrt{ \sum_{i=1}^{128} (x_{1,i} - x_{2,i})^2 }
formula four
wherein d represents the Euclidean distance, and x_1, x_2 are the two face feature vectors.
Setting the threshold window to greater than 0.8 and less than 0.98 avoids the excessive face similarity typical of a spoofing attempt; features meeting the condition are added to the face recognition feature queue. If the similarity of face features in the queue exceeds 0.98, fraudulent face authentication is assumed: the current feature queue is emptied, the process ends, and processing moves on to the next frame of the near-infrared video.
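The inter-frame gating on the feature queue can be sketched as follows. A minimal sketch, with two assumptions flagged: the 0.8 to 0.98 window is read as an interval on the Euclidean distance of formula four, and an out-of-window frame empties the queue, as the description states for the over-similar case.

```python
import numpy as np
from collections import deque

LOW, HIGH = 0.8, 0.98   # inter-frame distance window from the description (assumed reading)

def try_enqueue(queue, feat, maxlen=3):
    # Accept a new 128-d feature only if its Euclidean distance to the most recent
    # queued feature falls inside (LOW, HIGH); otherwise treat the frame as a
    # suspected spoof (e.g. a static replayed face) and empty the queue.
    if queue:
        d = float(np.linalg.norm(queue[-1] - feat))
        if not (LOW < d < HIGH):
            queue.clear()
            return False
    queue.append(feat)
    while len(queue) > maxlen:   # queue length fixed at 3 per the description
        queue.popleft()
    return True
```

A replayed still photo yields near-zero inter-frame distances and is rejected; a live face produces small but non-trivial variation that passes the window.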
Meanwhile, the key points found during face detection are used for 3D face shape reconstruction based on the Candide-3 model. The Candide model is a parameterized model; rendering a 3D face with it requires very little computation, so it is widely used in video transmission and video animation. Candide-3 is the third version of the Candide model and consists of 113 vertices and 168 faces; the model is drawn by connecting the coordinates of its 113 vertices. The first step of 3D face model reconstruction is to initialize the Candide-3 standard model: the extreme values of the model on the X, Y and Z axes of the three-dimensional coordinate space are recorded as minX, maxX, minY, maxY, minZ and maxZ, and serve as the basis of the initial texture-mapping coordinates. A face image is then loaded for coordinate mapping: (0, 0) corresponds to (minX, minY, 0) of the space coordinate system, (0, 1) maps to (minX, maxY, 0), (1, 0) maps to (maxX, minY, 0), and (1, 1) maps to (maxX, maxY, 0).
By fine-tuning the key-point coordinates, each point P(x, y, z) of the Candide-3 model is projected onto the XY plane and matched to the corresponding key point of the input face image, generating the 3D model of the face. Whether a micro-expression change exists is measured by calculating the coefficient of determination R^2 of the 3D models in the queue, as shown in formula five:
R^2 = 1 - \frac{ \sum_i \left[ (x_i - x'_i)^2 + (y_i - y'_i)^2 + (z_i - z'_i)^2 \right] }{ \sum_i \left[ (x_i - \bar{x})^2 + (y_i - \bar{y})^2 + (z_i - \bar{z})^2 \right] }
formula five
wherein (x_i, y_i, z_i) are the vertex coordinates of the current face 3D model, the primed coordinates are those of the face 3D model used for comparison, and the barred symbols are coordinate means.
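The R^2 comparison of the queued 3D models can be sketched as below. This is an illustrative sketch using the standard coefficient-of-determination form, which is our reconstruction of formula five (the original equation is only an image placeholder); the 113 x 3 vertex-array shape follows the Candide-3 description above.

```python
import numpy as np

def r_squared(model_pts, ref_pts):
    # Standard goodness-of-fit R^2 between two Candide-3 vertex sets (113 x 3):
    # 1 - (residual sum of squares) / (total sum of squares about the mean vertex).
    model_pts = np.asarray(model_pts, dtype=np.float64)
    ref_pts = np.asarray(ref_pts, dtype=np.float64)
    ss_res = np.sum((model_pts - ref_pts) ** 2)
    ss_tot = np.sum((model_pts - model_pts.mean(axis=0)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Identical frames give R^2 = 1; a genuine micro-expression perturbs the vertices and pulls R^2 down into the 0.8 to 0.98 acceptance band.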
If R^2 lies within the interval 0.8 to 0.98, the features in the feature queue extracted from the near-infrared video stream are compared with the registered near-infrared face library: the Euclidean distance is calculated with the threshold set above 0.6, and matching succeeds when the condition is met. At the same time, a face image is acquired from the visible-light video stream, its face features are extracted and compared with the registered visible-light face library, again with the threshold set above 0.6. The whole recognition process must pass through both the near-infrared and the visible-light face libraries. If the visible-light camera images poorly, near-infrared single-channel verification can be used instead.
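The dual-library matching step can be sketched as follows. Heavily hedged: the translated text sets "the threshold value to be more than 0.6", which is ambiguous for a distance measure, so this sketch assumes a distance *below* 0.6 counts as a match; the requirement that both channels agree on the same identity is taken from the statement that recognition must pass both libraries.

```python
import numpy as np

MATCH_THRESHOLD = 0.6   # threshold from the description; we assume a distance
                        # below it counts as a match (the translation is ambiguous)

def dual_channel_match(nir_feat, vis_feat, nir_library, vis_library):
    # Recognition succeeds only when the NIR feature and the visible-light
    # feature both match the same registered identity within the threshold.
    def best(feat, lib):
        name = min(lib, key=lambda k: np.linalg.norm(feat - lib[k]))
        return name, float(np.linalg.norm(feat - lib[name]))
    n_name, n_d = best(nir_feat, nir_library)
    v_name, v_d = best(vis_feat, vis_library)
    if n_name == v_name and n_d < MATCH_THRESHOLD and v_d < MATCH_THRESHOLD:
        return n_name
    return None
```

When the visible-light image is unusable, the same `best` search over the NIR library alone would give the single-channel fallback the description mentions.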
The invention performs anti-fraud design not only at the source of image acquisition, but also adopts a dual-channel liveness detection strategy in the face detection and dynamic identification processes, forming a multi-level anti-fraud face recognition system that effectively improves the security of the system and defeats spoofing attacks from picture images and electronic-device images. In the neural network design, computational economy is fully considered, and the advantages of the near-infrared (NIR) camera, notably its strong adaptability to illumination, are fully exploited, achieving good interactivity while reducing the operating cost of the system.
The invention was experimentally validated on the liveness detection data set of Nanjing University of Aeronautics and Astronautics, augmented with additional data. The test data include 83 negative-sample images from 20 users, covering multiple viewing angles and multiple fraud measures; their specific composition is shown in table 10. The positive samples comprise 125 test passes by 10 users. The experimental results are as follows: the accuracy of photo liveness detection reaches 96.38%, the accuracy of liveness detection under electronic-screen spoofing attack reaches 100%, and the overall face recognition accuracy reaches 97.60%.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. For example, the methods described in the specific embodiments may be modified without departing from the basic spirit of the invention. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (1)

1. A multi-strategy anti-fraud face recognition method is characterized by comprising the following steps:
step one: performing face detection on the obtained face image, and sending the detected face image into a bright-pupil detection channel and a skin-color matching detection channel respectively; during near-infrared face detection, detecting face key points with the GBDT algorithm, coarsely locating the eyes, finely locating the pupils, and detecting whether the bright-pupil effect produced by the near-infrared camera's illumination structure is present; meanwhile, computing the skin-color histogram of the face region in the near-infrared image and judging whether it matches a real face; and, when the face image cannot satisfy the conditions of both channels simultaneously, terminating the process and entering the detection of the next frame of image;
step two: during face recognition, extracting the face features of three frames with the residual-based convolutional neural network (ResNet) model and calculating the Euclidean distance, rejecting faces that are too similar; adding features meeting the requirement to the face feature queue; using the face key points found during detection, performing 3D face reconstruction based on the Candide-3 model, extracting the three frames of face 3D models, calculating the R-squared goodness-of-fit coefficient, and detecting whether a micro-expression change is present; and, once the conditions are met, comparing the features in the face recognition feature queue with the registered face library by Euclidean distance to complete the face recognition;
in the first step, detecting face key points with the gradient-boosted regression tree method and locating the eye positions; locating the pupils by Hough circle detection, binarizing the pupil region in gray scale, and counting the number of white-spot pixels with the size threshold set to 2 to 10 pixels to measure whether the bright-pupil effect produced by the near-infrared camera's illumination structure is detected; judging that fraud exists if the number of white-spot pixels is not in the range 2 to 10; meanwhile, matching the face-region image against the skin-color gray-histogram model, computing the variance over the gray-value interval 75 to 95, and judging fraud if the variance is smaller than 18;
in the second step, addressing the dynamics of the face recognition process, extracting face features at registration time with the residual-based convolutional neural network (ResNet) model, the feature value of a face image being abstracted as a 128-dimensional feature vector; registering near-infrared and visible-light face images separately, and establishing a near-infrared face library and a visible-light face library for recognition and retrieval; the metric learning module in the network adopting the triplet loss function, as shown in formula three:
L = \sum_{i}^{N} \left[ \| f(x_i^a) - f(x_i^p) \|_2^2 - \| f(x_i^a) - f(x_i^n) \|_2^2 + \alpha \right]_+
wherein f(x) is the feature expression of each element of the triplet; [.]_+ takes the bracketed value as the loss when it is greater than zero and zero otherwise; the superscript a denotes the anchor sample, p the same-person feature, n the non-same-person feature, and \alpha is the margin;
in the process of face recognition, extracting the face features of three frames with the residual-based convolutional neural network (ResNet) model and calculating the Euclidean distance, the threshold range being 0.8 to 0.98, within which the similarity is normal; adding features satisfying the threshold into the face recognition feature queue, and otherwise emptying the current feature queue, ending the process, and moving on to the next frame of the near-infrared video stream; from the face key points found during face detection, reconstructing a face 3D model based on the Candide-3 model, extracting the three frames of face 3D model parameters, and calculating the R-squared goodness-of-fit coefficient, as shown in formula five:
R^2 = 1 - \frac{ \sum_i \left[ (x_i - x'_i)^2 + (y_i - y'_i)^2 + (z_i - z'_i)^2 \right] }{ \sum_i \left[ (x_i - \bar{x})^2 + (y_i - \bar{y})^2 + (z_i - \bar{z})^2 \right] }
wherein (x_i, y_i, z_i) are the vertex coordinates of the current face 3D model, the primed coordinates are those of the face 3D model used for comparison, and the barred symbols are coordinate means;
set up R2The threshold range is 0.8 to 0.98, and whether the micro-expression changes or not is detected; if R is2And comparing the features in the feature queue of the face recognition with a face library registered by the face within the range of 0.8 to 0.98, calculating the Euclidean distance, setting the threshold value to be more than 0.6, and meeting the condition that the matching is successful.
CN201711452306.0A 2017-12-28 2017-12-28 Multi-strategy anti-fraud face recognition method Active CN108009531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711452306.0A CN108009531B (en) 2017-12-28 2017-12-28 Multi-strategy anti-fraud face recognition method


Publications (2)

Publication Number Publication Date
CN108009531A CN108009531A (en) 2018-05-08
CN108009531B true CN108009531B (en) 2022-01-07

Family

ID=62061901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711452306.0A Active CN108009531B (en) 2017-12-28 2017-12-28 Multi-strategy anti-fraud face recognition method

Country Status (1)

Country Link
CN (1) CN108009531B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106397A (en) * 2013-01-19 2013-05-15 华南理工大学 Human face living body detection method based on bright pupil effect
CN106650669A (en) * 2016-12-27 2017-05-10 重庆邮电大学 Face recognition method for identifying counterfeit photo deception
CN107103266A (en) * 2016-02-23 2017-08-29 中国科学院声学研究所 The training of two-dimension human face fraud detection grader and face fraud detection method
CN107506717A (en) * 2017-08-17 2017-12-22 南京东方网信网络科技有限公司 Without the face identification method based on depth conversion study in constraint scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100886557B1 (en) * 2007-05-03 2009-03-02 삼성전자주식회사 System and method for face recognition based on adaptive learning


Also Published As

Publication number Publication date
CN108009531A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108009531B (en) Multi-strategy anti-fraud face recognition method
CN108038456B (en) Anti-deception method in face recognition system
CN107862299B (en) Living body face detection method based on near-infrared and visible light binocular cameras
CN108764071B (en) Real face detection method and device based on infrared and visible light images
US7715596B2 (en) Method for controlling photographs of people
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
WO2011156143A2 (en) Distinguishing live faces from flat surfaces
CN109670430A (en) A kind of face vivo identification method of the multiple Classifiers Combination based on deep learning
CN109086724A (en) A kind of method for detecting human face and storage medium of acceleration
CN107798279A (en) Face living body detection method and device
CN111415339B (en) Image defect detection method for complex texture industrial product
CN105205437B (en) Side face detection method and device based on contouring head verifying
CN109359577A (en) A kind of Complex Background number detection system based on machine learning
JP2020518879A (en) Detection system, detection device and method thereof
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN111832405A (en) Face recognition method based on HOG and depth residual error network
CN109325472B (en) Face living body detection method based on depth information
CN107944395A (en) A kind of method and system based on neutral net verification testimony of a witness unification
CN111222433A (en) Automatic face auditing method, system, equipment and readable storage medium
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN110222647A (en) A kind of human face in-vivo detection method based on convolutional neural networks
CN111274851A (en) Living body detection method and device
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN111832464A (en) Living body detection method and device based on near-infrared camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant