CN109711243A - Static three-dimensional face liveness detection method based on deep learning - Google Patents
Static three-dimensional face liveness detection method based on deep learning
- Publication number: CN109711243A
- Application number: CN201811296335.7A
- Authority: CN (China)
- Prior art keywords: depth, face, image, vector, depth value
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a static three-dimensional face liveness detection method based on deep learning. The method comprises: capturing a color image with a color camera and a depth image with a depth camera; when a face is detected in the color image, obtaining a feature vector corresponding to the face from the color image and the depth image through a first convolutional neural network and a second convolutional neural network; and judging, according to the feature vector corresponding to the face, whether the face is live by means of a pre-trained liveness detection classifier. Using only two cameras, and combining the characteristics of color and depth images with deep learning, machine learning and related techniques, the invention greatly improves the speed, pass rate and anti-spoofing rate of face liveness detection, at low cost and with high precision.
Description
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a static three-dimensional face liveness detection method based on deep learning.
Background art
With the continuous development of face recognition technology, many products have begun to use face recognition to verify user identity, such as bank ATMs, self-service shops and even household door locks. However, general face recognition technology cannot effectively detect whether the user is a live person, so a malicious actor can impersonate a legitimate user by presenting a printed photo of another person, or by playing a video of another person on a mobile phone, thereby fooling the face recognition system. Face liveness detection technology has therefore emerged.
At present, a face liveness detection technique provided in the related art uses images captured by two cameras to obtain 3D facial feature points and train a 3D face detection classifier. A face region and an eye region are then extracted from an image captured by a third camera, a convolutional neural network is used as the eye detection model, and the data from the three cameras are combined to make the liveness judgment.
In the related art described above, acquiring 3D facial feature points from the data of two cameras is time-consuming and cannot achieve real-time detection, while the dependence on the accuracy of the eye detection algorithm means that neither efficiency nor accuracy can be guaranteed. Moreover, three cameras are required, the viewing angles and image alignment of the three cameras must be considered, the amount of computation is large, and the cost is high.
Summary of the invention
To solve the above problems, the present invention provides a static three-dimensional face liveness detection method, apparatus, device and computer-readable storage medium based on deep learning. Using only two cameras, and combining the characteristics of color and depth images with deep learning, machine learning and related techniques, it greatly improves the speed, pass rate and anti-spoofing rate of face liveness detection, at low cost and with high precision. The present invention solves the above problems through the following aspects.
In a first aspect, an embodiment of the invention provides a static three-dimensional face liveness detection method based on deep learning, the method comprising:
capturing a color image with a color camera and a depth image with a depth camera;
when a face is detected in the color image, obtaining a feature vector corresponding to the face from the color image and the depth image through a first convolutional neural network and a second convolutional neural network;
judging, according to the feature vector corresponding to the face, whether the face is live by means of a pre-trained liveness detection classifier.
With reference to the first aspect, an embodiment of the invention provides a first possible implementation of the first aspect, wherein obtaining the feature vector corresponding to the face from the color image and the depth image through the first convolutional neural network and the second convolutional neural network comprises:
performing face cropping and normalization on the color image to obtain a color face image;
performing face cropping and normalization on the depth image to obtain a depth face image;
performing feature extraction on the color face image and the depth face image respectively through the first convolutional neural network to obtain a first color vector and a first depth vector;
performing feature extraction on the color face image and the depth face image respectively through the second convolutional neural network to obtain a second color vector and a second depth vector;
concatenating the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face.
With reference to the first possible implementation of the first aspect, an embodiment of the invention provides a second possible implementation of the first aspect, wherein performing face cropping and normalization on the depth image to obtain the depth face image comprises:
obtaining a preset number of facial key point positions from the color image;
obtaining, from the depth image, the key point depth value corresponding to each facial key point position;
normalizing the face region in the depth image according to the key point depth values to obtain the depth face image.
With reference to the second possible implementation of the first aspect, an embodiment of the invention provides a third possible implementation of the first aspect, wherein obtaining, from the depth image, the key point depth value corresponding to each facial key point position comprises:
judging whether the depth value at the facial key point position in the depth image is 0;
if not, taking that depth value as the key point depth value corresponding to the facial key point position;
if so, obtaining the depth values of four points adjacent to the facial key point position, and interpolating from the depth values of the four adjacent points to obtain the key point depth value corresponding to the facial key point position.
With reference to the second possible implementation of the first aspect, an embodiment of the invention provides a fourth possible implementation of the first aspect, wherein normalizing the face region in the depth image according to the key point depth values to obtain the depth face image comprises:
cropping a face region image from the depth image;
determining the maximum depth value and the minimum depth value among the key point depth values;
determining, in the face region image, first pixels whose depth values are greater than the maximum depth value, and second pixels whose depth values are less than the minimum depth value;
setting the depth values of the first pixels to the maximum depth value, and setting the depth values of the second pixels to the minimum depth value;
subtracting the maximum depth value from the depth value of each pixel in the face region image and then dividing by the difference between the maximum depth value and the minimum depth value, to obtain the depth face image.
With reference to the first possible implementation of the first aspect, an embodiment of the invention provides a fifth possible implementation of the first aspect, wherein concatenating the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face comprises:
calculating a first absolute difference between the first color vector and the first depth vector, and calculating a second absolute difference between the second color vector and the second depth vector;
concatenating the first color vector, the first depth vector, the first absolute difference, the second color vector, the second depth vector and the second absolute difference into a first concatenated vector;
dividing the first concatenated vector into two halves, and calculating a third absolute difference between the two halves;
concatenating the first concatenated vector and the third absolute difference into the feature vector corresponding to the face.
With reference to the first aspect, an embodiment of the invention provides a sixth possible implementation of the first aspect, wherein, before judging whether the face is live by means of the pre-trained liveness detection classifier according to the feature vector corresponding to the face, the method further comprises:
photographing live samples with the color camera and the depth camera to obtain live features;
photographing non-live samples with the color camera and the depth camera to obtain non-live features;
training the liveness detection classifier according to the live features and the non-live features.
In a second aspect, an embodiment of the invention provides a static three-dimensional face liveness detection apparatus based on deep learning, the apparatus comprising:
a capture module, configured to capture a color image with a color camera and a depth image with a depth camera;
a vector obtaining module, configured to, when a face is detected in the color image, obtain a feature vector corresponding to the face from the color image and the depth image through a first convolutional neural network and a second convolutional neural network;
a judgment module, configured to judge, according to the feature vector corresponding to the face, whether the face is live by means of a pre-trained liveness detection classifier.
In a third aspect, an embodiment of the invention provides a static three-dimensional face liveness detection device based on deep learning, comprising:
one or more processors;
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the first aspect or in any of its possible implementations.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method described in the first aspect or in any of its possible implementations.
In the embodiments of the present invention, a color image is captured with a color camera and a depth image with a depth camera; when a face is detected in the color image, a feature vector corresponding to the face is obtained from the color image and the depth image through a first convolutional neural network and a second convolutional neural network; and according to that feature vector, a pre-trained liveness detection classifier judges whether the face is live. Using only two cameras, and combining the characteristics of color and depth images with deep learning, machine learning and related techniques, the invention greatly improves the speed, pass rate and anti-spoofing rate of face liveness detection, at low cost and with high precision.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a schematic flowchart of a static three-dimensional face liveness detection method based on deep learning provided by Embodiment 1 of the present invention;
Fig. 2 shows a schematic flowchart of another static three-dimensional face liveness detection method based on deep learning provided by Embodiment 1 of the present invention;
Fig. 3 shows a schematic structural diagram of the first convolutional neural network provided by Embodiment 1 of the present invention;
Fig. 4 shows a schematic structural diagram of the second convolutional neural network provided by Embodiment 1 of the present invention;
Fig. 5 shows a schematic diagram of the feature vector concatenation provided by Embodiment 1 of the present invention;
Fig. 6 shows a schematic flowchart of yet another static three-dimensional face liveness detection method based on deep learning provided by Embodiment 1 of the present invention;
Fig. 7 shows a schematic structural diagram of a static three-dimensional face liveness detection apparatus based on deep learning provided by Embodiment 2 of the present invention.
Specific embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be thoroughly understood and its scope fully conveyed to those skilled in the art.
Embodiment 1
Referring to Fig. 1, an embodiment of the present invention provides a static three-dimensional face liveness detection method based on deep learning, which specifically includes the following steps:
Step 101: capture a color image with the color camera and a depth image with the depth camera.
The embodiment of the present invention uses only two cameras: one color camera and one depth camera. A product using the face liveness detection provided by this embodiment, such as a bank ATM or a household door lock, only needs to be equipped with these two cameras. The scene in the monitored area is photographed by the color camera and the depth camera, yielding the color image and the depth image corresponding to the monitored area.
After the color image and the depth image are captured, the two images are first registered, to ensure that the same object occupies the same position in the color image and in the depth image. Because the color image and the depth image capture the monitored area at the same instant, registering them ensures that identical objects appear at the same positions in the two images, which improves the accuracy of the subsequent liveness detection processing based on the color image and the depth image and reduces computational error.
After the color image and the depth image have been captured and registered, a face detector performs face detection on the color image. If a face is detected in the color image, step 102 is executed. If no face is detected in the color image, this step continues to monitor and photograph the area through the color camera and the depth camera, and step 102 is executed once a face is detected in a color image captured by the color camera.
Step 102: when a face is detected in the color image, obtain the feature vector corresponding to the face from the color image and the depth image through the first convolutional neural network and the second convolutional neural network.
As shown in Fig. 2, when a face is detected in the color image, the feature vector corresponding to the face is obtained through the following operations 1021 to 1025:
Step 1021: when a face is detected in the color image, perform face cropping and normalization on the color image to obtain a color face image.
When a face is detected in the color image, the face region in the color image is cropped, and the cropped face region is normalized to obtain the color face image. In the embodiment of the present invention the color face image is resized to a predefined size, for example 96x96.
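As a minimal illustration of this crop-and-resize step, the following NumPy sketch uses nearest-neighbour resampling; the patent does not specify the resampling method or the face-detector interface, so the bounding box and the resampling choice here are assumptions.

```python
import numpy as np

def crop_and_resize(image: np.ndarray, box: tuple, size: int = 96) -> np.ndarray:
    """Crop a detected face region (x, y, w, h) and resize it to size x size.

    Nearest-neighbour resampling is an assumption; the patent only states
    that the face image is normalized to a predefined size such as 96x96.
    """
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    # Map each output pixel back to the nearest source pixel.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return face[rows][:, cols]

color_image = np.zeros((480, 640, 3), dtype=np.uint8)
face_96 = crop_and_resize(color_image, (200, 100, 150, 180))
print(face_96.shape)  # (96, 96, 3)
```

In practice this step would typically be done with an image-processing library (e.g. OpenCV), which also offers higher-quality interpolation.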
Step 1022: perform face cropping and normalization on the depth image to obtain a depth face image.
This step obtains the depth face image through the following operations A1 to A3:
A1: obtain a preset number of facial key point positions from the color image.
The facial key points may be the nose tip, the eyes, the two mouth corners, the two eyebrows, and so on. The preset number may be 3, 5, etc.; the more facial key points are selected, the higher the precision of the subsequent liveness detection, but also the larger the amount of computation. The embodiment of the present invention is described taking the preset number as 5 and the facial key points as the nose tip, the two eyes and the two mouth corners. That is, when a face is detected in the color image, the positions of these 5 facial key points are obtained from the color image. A facial key point position is the coordinate, in the color image, of the key point, i.e., of the nose tip, an eye or a mouth corner.
A2: obtain, from the depth image, the key point depth value corresponding to each facial key point position.
Since the color image and the depth image were registered in step 101, the position of a facial key point is identical in the color image and in the depth image. According to each facial key point position obtained from the color image, the corresponding key point depth value is obtained from the depth image at that position.
Specifically, for each facial key point position, it is judged whether the depth value in the depth image at that position is 0. If it is not 0, the depth value at the position is taken as the key point depth value corresponding to the facial key point position. If it is 0, the depth values of the four points adjacent to the facial key point position are obtained, and interpolation is performed on the depth values of these four adjacent points to obtain the key point depth value corresponding to the position.
Before interpolating from the depth values of the four adjacent points, it is judged for each adjacent point whether its depth value is 0. If none of the four adjacent depth values is 0, they are interpolated to obtain the key point depth value corresponding to the facial key point position. If some adjacent points have a depth value of 0, the same procedure is applied iteratively to fill those points by interpolation, until none of the adjacent depth values is 0, after which the key point depth value corresponding to the facial key point position is obtained by interpolating the depth values of the four adjacent points. The interpolation algorithm used may be Newton interpolation.
For each facial key point position, the corresponding key point depth value is obtained from the depth image in this way.
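The zero-depth handling of A2 can be sketched as follows. Note that the patent suggests Newton interpolation over the four adjacent points; this simplified sketch averages the non-zero neighbours instead, and does not implement the iterative fill for neighbours that are themselves zero.

```python
import numpy as np

def keypoint_depth(depth: np.ndarray, x: int, y: int) -> float:
    """Return the depth value at a facial key point (x, y).

    If the sensor reported 0 (no reading) at the key point, fill it from its
    four neighbours. Averaging the non-zero neighbours is a simpler stand-in
    for the Newton interpolation mentioned in the patent.
    """
    if depth[y, x] != 0:
        return float(depth[y, x])
    h, w = depth.shape
    neighbours = [depth[ny, nx]
                  for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != 0]
    return float(np.mean(neighbours)) if neighbours else 0.0

d = np.array([[0, 5, 0],
              [4, 0, 6],
              [0, 5, 0]], dtype=np.float32)
print(keypoint_depth(d, 1, 1))  # 5.0: mean of the four non-zero neighbours
```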
A3: normalize the face region in the depth image according to the key point depth values to obtain the depth face image.
Specifically, a face region image is cropped from the depth image. The maximum depth value and the minimum depth value among the key point depth values are determined. In the face region image, the first pixels, whose depth values are greater than the maximum depth value, and the second pixels, whose depth values are less than the minimum depth value, are determined. The depth values of all first pixels are set to the maximum depth value, and the depth values of all second pixels are set to the minimum depth value. After this modification, the depth value of each pixel in the face region image is reduced by the maximum depth value and then divided by the difference between the maximum depth value and the minimum depth value, yielding the depth face image.
In the embodiment of the present invention, after the maximum and minimum depth values among the key point depth values are determined, the difference between them is calculated and it is judged whether the difference is less than a first threshold or greater than a second threshold. If the difference is less than the first threshold, or greater than the second threshold, the face in the color image is directly determined to be non-live, and the method returns to step 101 to continue monitoring and photographing the area through the color camera and the depth camera. If the difference is greater than or equal to the first threshold and less than or equal to the second threshold, it cannot be directly determined whether the face is live; the depth image is then normalized in the manner described above to obtain the depth face image, with which the subsequent face liveness detection is carried out.
The first threshold is the lower limit, and the second threshold the upper limit, of the difference between the maximum and minimum key point depth values, both obtained by processing a large amount of sample data, the sample data being a large number of face images.
In the embodiment of the present invention the obtained depth face image is resized to a predefined size, identical to the size of the color face image obtained in step 1021 above, for example 96x96.
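The clamping and normalization of A3 can be sketched with a few lines of NumPy. Note that the patent's formula subtracts the maximum depth value before dividing, so the result lies in [-1, 0] rather than the usual [0, 1]; the sketch follows the patent's formula as stated.

```python
import numpy as np

def normalize_depth_face(face_depth: np.ndarray, kp_depths: list) -> np.ndarray:
    """Clamp a cropped depth face region to the key-point depth range, then
    normalize with the patent's formula: (d - d_max) / (d_max - d_min)."""
    d_max, d_min = max(kp_depths), min(kp_depths)
    clamped = np.clip(face_depth, d_min, d_max)  # handles first/second pixels
    return (clamped - d_max) / (d_max - d_min)

region = np.array([[900.0, 1000.0],
                   [1100.0, 1250.0]])  # depth values in e.g. millimetres
out = normalize_depth_face(region, [1000.0, 1050.0, 1200.0])
print(out)  # values in [-1, 0]; 900 is clamped up to 1000, 1250 down to 1200
```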
Step 1023: perform feature extraction on the color face image and the depth face image respectively through the first convolutional neural network to obtain the first color vector and the first depth vector.
As shown in Fig. 3, the first convolutional neural network comprises six convolutional layers C1, C2, C3, C4, C5 and C6 and two fully connected layers S1 and S2. The output layer of convolutional layer C1 outputs 32 feature maps of size 48x48; C2 outputs 64 feature maps of size 24x24; C3 outputs 64 feature maps of size 16x16; C4 outputs 128 feature maps of size 8x8; C5 outputs 256 feature maps of size 4x4; and C6 outputs 256 feature maps of size 2x2. Fully connected layer S1 has 256 nodes, and fully connected layer S2 has 128 nodes. In Fig. 3 the symbol @ is a separator: the content before @ indicates the size of the feature maps output by a convolutional layer, and the content after @ indicates their number.
In the embodiment of the present invention, the output of the last fully connected layer of the first convolutional neural network, i.e., the output of S2, is taken as the processing result of the first convolutional neural network, yielding a vector of dimension 128. In practice, other convolutional neural networks with a structure similar to that of the first convolutional neural network may also be used.
The color face image is input into the first convolutional neural network, which performs feature extraction on it to obtain the 128-dimensional first color vector. The depth face image is input into the first convolutional neural network, which performs feature extraction on it to obtain the 128-dimensional first depth vector.
Step 1024: perform feature extraction on the color face image and the depth face image respectively through the second convolutional neural network to obtain the second color vector and the second depth vector.
As shown in Fig. 4, the second convolutional neural network comprises five convolutional layers D1, D2, D3, D4 and D5. The output layer of convolutional layer D1 outputs 32 feature maps of size 46x46; D2 outputs 64 feature maps of size 21x21; D3 outputs 64 feature maps of size 8x8; D4 outputs 128 feature maps of size 3x3; and D5 outputs 256 feature maps of size 1x1. In Fig. 4 the symbol @ is a separator: the content before @ indicates the size of the feature maps output by a convolutional layer, and the content after @ indicates their number.
In the embodiment of the present invention, the output of the fifth convolutional layer of the second convolutional neural network, i.e., the output of D5, is taken as the processing result of the second convolutional neural network, yielding a vector of dimension 256. In practice, other convolutional neural networks with a structure similar to that of the second convolutional neural network may also be used.
The color face image is input into the second convolutional neural network, which performs feature extraction on it to obtain the 256-dimensional second color vector. The depth face image is input into the second convolutional neural network, which performs feature extraction on it to obtain the 256-dimensional second depth vector.
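The feature-map sizes of Fig. 4 can be reproduced with the standard convolution output-size formula. The (kernel, stride) pairs below are one hypothetical configuration that yields exactly the D1-D5 sizes from a 96x96 input; the patent does not disclose the actual kernel sizes, strides or padding.

```python
def conv_out(size: int, kernel: int, stride: int, pad: int = 0) -> int:
    """Spatial output size of a convolution:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical (kernel, stride) choices reproducing Fig. 4's D1-D5 sizes.
sizes, size = [], 96
for kernel, stride in [(5, 2), (5, 2), (7, 2), (4, 2), (3, 1)]:
    size = conv_out(size, kernel, stride)
    sizes.append(size)
print(sizes)  # [46, 21, 8, 3, 1]
```

The final 256 feature maps of size 1x1 flatten directly into the 256-dimensional vector described above.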
Step 1025: concatenate the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face.
Specifically, the first absolute difference between the first color vector and the first depth vector is calculated, and the second absolute difference between the second color vector and the second depth vector is calculated. The first color vector and the first depth vector are both 128-dimensional; the first absolute difference is obtained by subtracting the values of the two vectors in each dimension and taking the absolute value. The second color vector and the second depth vector are both 256-dimensional; the second absolute difference is obtained in the same way.
The first color vector, the first depth vector, the first absolute difference, the second color vector, the second depth vector and the second absolute difference are concatenated into the first concatenated vector; the first concatenated vector is divided into two halves, and the third absolute difference between the two halves is calculated; the first concatenated vector and the third absolute difference are concatenated into the feature vector corresponding to the face.
As shown in Fig. 5, let f1vis denote the first color vector and f1depth denote the first depth vector; the computed first absolute difference is Abs(f1vis - f1depth). Let f2vis denote the second color vector and f2depth denote the second depth vector; the computed second absolute difference is Abs(f2vis - f2depth). As shown in Fig. 5, the first color vector f1vis, the first depth vector f1depth, the first absolute difference Abs(f1vis - f1depth), the second color vector f2vis, the second depth vector f2depth and the second absolute difference Abs(f2vis - f2depth) are spliced in sequence to obtain the first spliced vector. The first spliced vector is divided into two parts of equal dimension, f1 and f2 in Fig. 5; subtracting the values in each corresponding dimension of f1 and f2 and taking the absolute value gives the third absolute difference Abs(f1 - f2). The first spliced vector and the third absolute difference Abs(f1 - f2) are spliced together to obtain the feature vector f corresponding to the face, whose dimension is 1728. That is, the feature vector corresponding to the face is formed by splicing, in sequence, the first color vector f1vis, the first depth vector f1depth, the first absolute difference Abs(f1vis - f1depth), the second color vector f2vis, the second depth vector f2depth, the second absolute difference Abs(f2vis - f2depth) and the third absolute difference Abs(f1 - f2).
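The splicing described above can be sketched as follows; the function and variable names are illustrative, not part of the patent:

```python
import numpy as np

def build_feature_vector(f1_vis, f1_depth, f2_vis, f2_depth):
    """Splice the four CNN output vectors with element-wise absolute
    differences, as described for step 1025."""
    d1 = np.abs(f1_vis - f1_depth)             # first absolute difference, 128-dim
    d2 = np.abs(f2_vis - f2_depth)             # second absolute difference, 256-dim
    first_splice = np.concatenate([f1_vis, f1_depth, d1, f2_vis, f2_depth, d2])
    half = first_splice.size // 2              # split into the f1 and f2 halves
    d3 = np.abs(first_splice[:half] - first_splice[half:])  # third absolute difference
    return np.concatenate([first_splice, d3])

f = build_feature_vector(np.ones(128), np.zeros(128), np.ones(256), np.zeros(256))
print(f.shape)  # (1728,) = 3*128 + 3*256 + 576
```

The dimensions check out: the first spliced vector is 3x128 + 3x256 = 1152-dimensional, its two halves are 576-dimensional each, and 1152 + 576 = 1728.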
Step 103: according to the feature vector corresponding to the face, judge whether the face is a living body by means of a pre-trained in-vivo detection classifier.
After the feature vector corresponding to the face is obtained through step 102 above, it is input into the pre-trained in-vivo detection classifier SVM (Support Vector Machine). The detection result output by the in-vivo detection classifier SVM indicates whether the face is a living body or a non-living body.
In this embodiment of the present invention, before in-vivo detection is performed using the face in-vivo detection method provided herein, the in-vivo detection classifier SVM used for face in-vivo detection is first trained through the following operations: shooting living-body samples with the color camera and the depth camera to obtain living-body features; shooting non-living-body samples with the color camera and the depth camera to obtain non-living-body features; and training the in-vivo detection classifier according to the living-body features and the non-living-body features.
The living-body samples are a large number of live users, and the non-living-body samples are a large number of photos, videos of people, and the like. The color camera and the depth camera simultaneously capture images of each living-body sample, and likewise simultaneously capture images of each non-living-body sample. According to the captured images of each living-body sample, the living-body features corresponding to that sample are obtained through the operation of step 102 above; according to the captured images of each non-living-body sample, the non-living-body features corresponding to that sample are likewise obtained through the operation of step 102 above. The large numbers of living-body features and non-living-body features thus obtained are input into the SVM for training, yielding the in-vivo detection classifier SVM used for face in-vivo detection.
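A minimal sketch of this training step using scikit-learn's `SVC`; the synthetic features below stand in for the step-102 outputs, and the kernel choice and class sizes are assumptions, since the patent does not specify them:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in 1728-dim feature vectors from step 102:
# label 1 = living body, label 0 = non-living body (photo/video).
live_feats = rng.normal(loc=0.5, scale=0.1, size=(200, 1728))
spoof_feats = rng.normal(loc=-0.5, scale=0.1, size=(200, 1728))

X = np.vstack([live_feats, spoof_feats])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Train the in-vivo detection classifier SVM.
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(live_feats[:1])
```

At inference time, the feature vector of step 102 is passed to `clf.predict`, whose output marks the face as living or non-living.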
After the in-vivo detection classifier SVM is obtained through training, it is tested with test samples. The test samples include images of a certain number of live users captured by the color camera and the depth camera, as well as a certain number of images of photos or videos containing faces captured by the color camera and the depth camera. For each test sample, the corresponding feature vector is obtained according to the operation of step 102 above and then input into the trained in-vivo detection classifier SVM, yielding the judgment result corresponding to each test sample. The judgment accuracy of the in-vivo detection classifier SVM can be determined from these judgment results; if the judgment accuracy is lower than a preset value, the numbers of living-body samples and non-living-body samples are enlarged and the in-vivo detection classifier SVM is further trained in the manner described above.
When the judgment accuracy of the trained in-vivo detection classifier SVM is higher than the above preset value, the classifier is put into practical use, and face in-vivo detection and judgment are performed according to the operations of steps 101-103 above. As shown in Fig. 6, a color image and a depth image are first obtained by the color camera and the depth camera, face detection is then performed on the color image, and it is judged whether a face is detected; if not, the process returns to obtaining a color image and a depth image through the color camera and the depth camera. If a face is detected, a preset number of face key point positions are obtained from the color image, the key point depth values corresponding to the face key point positions are obtained and any missing depth values are filled in, the maximum depth value and the minimum depth value among the key point depth values are determined, the difference between the maximum depth value and the minimum depth value is calculated, and it is judged whether this difference is greater than or equal to a first threshold and less than or equal to a second threshold. If not, the process returns to obtaining a color image and a depth image through the color camera and the depth camera. If so, face cropping and normalization are performed on the depth image to obtain a depth face image, and face cropping and normalization are performed on the color image to obtain a color face image; the first convolutional neural network and the second convolutional neural network then perform feature extraction on the color face image and the depth face image respectively to obtain four vectors, the four vectors are spliced into a feature vector, and the feature vector is input into the pre-trained in-vivo detection classifier SVM for the living-body judgment.
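The depth-range gate in Fig. 6, which rejects flat surfaces such as a photo held up to the camera, can be sketched as follows; the threshold values and units are placeholders, since the patent does not fix them:

```python
def depth_range_ok(keypoint_depths, first_threshold=20.0, second_threshold=150.0):
    """Return True when (max - min) of the key point depth values lies in
    [first_threshold, second_threshold]. Thresholds are illustrative
    (e.g. millimeters); a flat photo yields a difference near 0."""
    diff = max(keypoint_depths) - min(keypoint_depths)
    return first_threshold <= diff <= second_threshold

print(depth_range_ok([500.0, 505.0, 503.0]))   # flat surface -> False
print(depth_range_ok([480.0, 520.0, 555.0]))   # face-like relief -> True
```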
In this embodiment of the present invention, the color camera may be replaced directly by an infrared camera, in which case the color image in the above operation steps is replaced directly by an infrared image. The embodiments of the present invention combine the characteristics of the color image (or infrared image) and the depth image with techniques such as ad hoc rules, deep learning and machine learning, achieving high recognition precision and fast in-vivo detection while ensuring both the pass rate and the anti-spoofing rate. The average in-vivo detection time is 200 milliseconds on the embedded platform RK3288 and less than 100 ms on the Windows platform, giving a good user experience. The scheme is also practical: it uses only two cameras, and the depth information and visible-light or infrared information can be obtained directly from existing depth camera products, so the cost is low and there is no need to adjust the viewing angles of multiple cameras.
In this embodiment of the present invention, a color image is captured by a color camera and a depth image is captured by a depth camera; when a face is detected in the color image, the feature vector corresponding to the face is obtained through the first convolutional neural network and the second convolutional neural network according to the color image and the depth image; and according to the feature vector corresponding to the face, whether the face is a living body is judged by a pre-trained in-vivo detection classifier. Using only two cameras, the present invention combines the characteristics of the color image and the depth image with techniques such as deep learning and machine learning, greatly improving the speed, pass rate and anti-spoofing rate of face in-vivo detection, with low cost and high precision.
Embodiment 2
Referring to Fig. 7, an embodiment of the present invention provides a static three-dimensional face in-vivo detection apparatus based on deep learning, configured to execute the static three-dimensional face in-vivo detection method based on deep learning provided by Embodiment 1 above. The apparatus includes:
a shooting module 20, configured to capture a color image by a color camera and capture a depth image by a depth camera;
a vector obtaining module 21, configured to, when a face is detected in the color image, obtain the feature vector corresponding to the face through the first convolutional neural network and the second convolutional neural network according to the color image and the depth image;
a judgment module 22, configured to judge, according to the feature vector corresponding to the face, whether the face is a living body by a pre-trained in-vivo detection classifier.
The above vector obtaining module 21 includes:
a color image normalization unit, configured to perform face cropping and normalization on the color image to obtain a color face image;
a depth image normalization unit, configured to perform face cropping and normalization on the depth image to obtain a depth face image;
a feature extraction unit, configured to perform feature extraction on the color face image and the depth face image respectively through the first convolutional neural network to obtain the first color vector and the first depth vector, and to perform feature extraction on the color face image and the depth face image respectively through the second convolutional neural network to obtain the second color vector and the second depth vector;
a splicing unit, configured to splice the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face.
The above depth image normalization unit includes:
an obtaining subunit, configured to obtain a preset number of face key point positions from the color image, and to obtain, from the depth image, the key point depth value corresponding to each face key point position;
a normalization subunit, configured to normalize the face region in the depth image according to each key point depth value to obtain the depth face image.
The above obtaining subunit is specifically configured to judge whether the depth value at a face key point position in the depth image is 0; if not, take that depth value as the key point depth value corresponding to the face key point position; if so, obtain the depth values of four points adjacent to the face key point position and interpolate according to the depth values of the four adjacent points to obtain the key point depth value corresponding to the face key point position.
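One way this interpolation of a missing (zero) key point depth could look in practice; the choice of the four axis-aligned neighbours and the averaging of their non-zero values are assumptions, since the patent only says that interpolation is performed:

```python
import numpy as np

def keypoint_depth(depth_img, row, col):
    """Return the depth at (row, col); if it is 0 (missing), interpolate
    from the four adjacent points (up/down/left/right assumed)."""
    d = depth_img[row, col]
    if d != 0:
        return float(d)
    neighbours = [depth_img[row - 1, col], depth_img[row + 1, col],
                  depth_img[row, col - 1], depth_img[row, col + 1]]
    valid = [v for v in neighbours if v != 0]  # ignore neighbours that are also missing
    return float(sum(valid) / len(valid)) if valid else 0.0

img = np.array([[0, 10, 0],
                [12, 0, 14],
                [0, 16, 0]], dtype=float)
print(keypoint_depth(img, 1, 1))   # (10 + 12 + 14 + 16) / 4 = 13.0
```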
The above normalization subunit is configured to crop the face region image from the depth image; determine the maximum depth value and the minimum depth value among the key point depth values; from the face region image, determine first pixels whose depth values are greater than the maximum depth value and second pixels whose depth values are less than the minimum depth value; modify the depth values of the first pixels to the maximum depth value and the depth values of the second pixels to the minimum depth value; and subtract the maximum depth value from the depth value of each pixel in the face region image and then divide by the difference between the maximum depth value and the minimum depth value, obtaining the depth face image.
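A sketch of this normalization in NumPy, implemented literally as described (subtracting the maximum and dividing by the range, which maps the face region into [-1, 0]); array and function names are illustrative:

```python
import numpy as np

def normalize_depth_face(face_region, keypoint_depths):
    """Clamp pixels outside the key-point depth range, then normalize as
    (d - max) / (max - min), per the described normalization subunit."""
    max_d = max(keypoint_depths)
    min_d = min(keypoint_depths)
    # First pixels (> max) become max_d, second pixels (< min) become min_d.
    clamped = np.clip(face_region, min_d, max_d)
    return (clamped - max_d) / (max_d - min_d)

face = np.array([[480.0, 700.0],
                 [300.0, 520.0]])
out = normalize_depth_face(face, [400.0, 600.0])
print(out)   # all values lie in [-1, 0]; 700 clamps to 600 -> 0.0, 300 clamps to 400 -> -1.0
```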
The splicing unit is configured to calculate the first absolute difference between the first color vector and the first depth vector, and the second absolute difference between the second color vector and the second depth vector; splice the first color vector, the first depth vector, the first absolute difference, the second color vector, the second depth vector and the second absolute difference into a first spliced vector; divide the first spliced vector into two parts and calculate the third absolute difference between the two parts; and splice the first spliced vector and the third absolute difference into the feature vector corresponding to the face.
In this embodiment of the present invention, the apparatus further includes:
a classifier training module, configured to shoot living-body samples with the color camera and the depth camera to obtain living-body features; shoot non-living-body samples with the color camera and the depth camera to obtain non-living-body features; and train the in-vivo detection classifier according to the living-body features and the non-living-body features.
In this embodiment of the present invention, a color image is captured by a color camera and a depth image is captured by a depth camera; when a face is detected in the color image, the feature vector corresponding to the face is obtained through the first convolutional neural network and the second convolutional neural network according to the color image and the depth image; and according to the feature vector corresponding to the face, whether the face is a living body is judged by a pre-trained in-vivo detection classifier. Using only two cameras, the present invention combines the characteristics of the color image and the depth image with techniques such as deep learning and machine learning, greatly improving the speed, pass rate and anti-spoofing rate of face in-vivo detection, with low cost and high precision.
Embodiment 3
An embodiment of the present invention provides a static three-dimensional face in-vivo detection device based on deep learning, which includes one or more processors and a storage device. The storage device is configured to store one or more programs; when the one or more programs are loaded and executed by the one or more processors, the static three-dimensional face in-vivo detection method based on deep learning provided by Embodiment 1 above is implemented.
In this embodiment of the present invention, a color image is captured by a color camera and a depth image is captured by a depth camera; when a face is detected in the color image, the feature vector corresponding to the face is obtained through the first convolutional neural network and the second convolutional neural network according to the color image and the depth image; and according to the feature vector corresponding to the face, whether the face is a living body is judged by a pre-trained in-vivo detection classifier. Using only two cameras, the present invention combines the characteristics of the color image and the depth image with techniques such as deep learning and machine learning, greatly improving the speed, pass rate and anti-spoofing rate of face in-vivo detection, with low cost and high precision.
Embodiment 4
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is loaded and executed by a processor, the static three-dimensional face in-vivo detection method based on deep learning provided by Embodiment 1 above is implemented.
In this embodiment of the present invention, a color image is captured by a color camera and a depth image is captured by a depth camera; when a face is detected in the color image, the feature vector corresponding to the face is obtained through the first convolutional neural network and the second convolutional neural network according to the color image and the depth image; and according to the feature vector corresponding to the face, whether the face is a living body is judged by a pre-trained in-vivo detection classifier. Using only two cameras, the present invention combines the characteristics of the color image and the depth image with techniques such as deep learning and machine learning, greatly improving the speed, pass rate and anti-spoofing rate of face in-vivo detection, with low cost and high precision.
It should be understood that the algorithms and displays provided herein are not inherently related to any particular computer, virtual apparatus or other device. Various general-purpose apparatuses may also be used with the teachings herein, and the structure required to construct such an apparatus is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented with various programming languages, and the description above of a specific language is made to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It should be appreciated, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention above, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the apparatus of an embodiment may be adaptively changed and arranged in one or more apparatuses different from that embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or apparatus so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to realize some or all of the functions of some or all of the components of the apparatus according to embodiments of the invention. The invention may also be implemented as apparatus or device programs (for example, computer programs and computer program products) for performing some or all of the methods described herein. Such programs implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The foregoing are merely preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A static three-dimensional face in-vivo detection method based on deep learning, characterized in that the method comprises:
capturing a color image by a color camera, and capturing a depth image by a depth camera;
when a face is detected in the color image, obtaining, according to the color image and the depth image, a feature vector corresponding to the face through a first convolutional neural network and a second convolutional neural network;
judging, according to the feature vector corresponding to the face, whether the face is a living body by a pre-trained in-vivo detection classifier.
2. The method according to claim 1, characterized in that the obtaining, according to the color image and the depth image, the feature vector corresponding to the face through the first convolutional neural network and the second convolutional neural network comprises:
performing face cropping and normalization on the color image to obtain a color face image;
performing face cropping and normalization on the depth image to obtain a depth face image;
performing feature extraction on the color face image and the depth face image respectively through the first convolutional neural network, to obtain a first color vector and a first depth vector;
performing feature extraction on the color face image and the depth face image respectively through the second convolutional neural network, to obtain a second color vector and a second depth vector;
splicing the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face.
3. The method according to claim 2, characterized in that the performing face cropping and normalization on the depth image to obtain the depth face image comprises:
obtaining a preset number of face key point positions from the color image;
obtaining, from the depth image, a key point depth value corresponding to each face key point position;
normalizing a face region in the depth image according to each key point depth value to obtain the depth face image.
4. The method according to claim 3, characterized in that the obtaining, from the depth image, the key point depth value corresponding to each face key point position comprises:
judging whether the depth value at a face key point position in the depth image is 0;
if not, taking the depth value as the key point depth value corresponding to the face key point position;
if so, obtaining depth values of four points adjacent to the face key point position, and interpolating according to the depth values of the four adjacent points to obtain the key point depth value corresponding to the face key point position.
5. The method according to claim 3, characterized in that the normalizing the face region in the depth image according to each key point depth value to obtain the depth face image comprises:
cropping a face region image from the depth image;
determining a maximum depth value and a minimum depth value among the key point depth values;
determining, from the face region image, first pixels whose depth values are greater than the maximum depth value and second pixels whose depth values are less than the minimum depth value;
modifying the depth values of the first pixels to the maximum depth value, and modifying the depth values of the second pixels to the minimum depth value;
subtracting the maximum depth value from the depth value of each pixel in the face region image and then dividing by the difference between the maximum depth value and the minimum depth value, to obtain the depth face image.
6. The method according to claim 2, characterized in that the splicing the first color vector, the second color vector, the first depth vector and the second depth vector to obtain the feature vector corresponding to the face comprises:
calculating a first absolute difference between the first color vector and the first depth vector, and calculating a second absolute difference between the second color vector and the second depth vector;
splicing the first color vector, the first depth vector, the first absolute difference, the second color vector, the second depth vector and the second absolute difference into a first spliced vector;
dividing the first spliced vector into two parts, and calculating a third absolute difference between the two parts;
splicing the first spliced vector and the third absolute difference into the feature vector corresponding to the face.
7. The method according to any one of claims 1-6, characterized in that, before the judging, according to the feature vector corresponding to the face, whether the face is a living body by the pre-trained in-vivo detection classifier, the method further comprises:
shooting living-body samples by the color camera and the depth camera to obtain living-body features;
shooting non-living-body samples by the color camera and the depth camera to obtain non-living-body features;
training the in-vivo detection classifier according to the living-body features and the non-living-body features.
8. A static three-dimensional face in-vivo detection apparatus based on deep learning, characterized in that the apparatus comprises:
a shooting module, configured to capture a color image by a color camera and capture a depth image by a depth camera;
a vector obtaining module, configured to, when a face is detected in the color image, obtain, according to the color image and the depth image, a feature vector corresponding to the face through a first convolutional neural network and a second convolutional neural network;
a judgment module, configured to judge, according to the feature vector corresponding to the face, whether the face is a living body by a pre-trained in-vivo detection classifier.
9. A static three-dimensional face in-vivo detection device based on deep learning, characterized by comprising:
one or more processors;
a storage device, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811296335.7A CN109711243B (en) | 2018-11-01 | 2018-11-01 | Static three-dimensional face in-vivo detection method based on deep learning |
PCT/CN2019/114677 WO2020088588A1 (en) | 2018-11-01 | 2019-10-31 | Deep learning-based static three-dimensional method for detecting whether face belongs to living body |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811296335.7A CN109711243B (en) | 2018-11-01 | 2018-11-01 | Static three-dimensional face in-vivo detection method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109711243A true CN109711243A (en) | 2019-05-03 |
CN109711243B CN109711243B (en) | 2021-02-09 |
Family
ID=66254862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811296335.7A Active CN109711243B (en) | 2018-11-01 | 2018-11-01 | Static three-dimensional face in-vivo detection method based on deep learning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109711243B (en) |
WO (1) | WO2020088588A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814574B (en) * | 2020-06-12 | 2023-09-15 | 浙江大学 | Face living body detection system, terminal and storage medium using a double-branch three-dimensional convolution model |
CN112883940A (en) * | 2021-04-13 | 2021-06-01 | 深圳市赛为智能股份有限公司 | Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium |
CN113780222B (en) * | 2021-09-17 | 2024-02-27 | 深圳市繁维科技有限公司 | Face living body detection method and device, electronic equipment and readable storage medium |
CN116543001B (en) * | 2023-05-26 | 2024-01-12 | 广州工程技术职业学院 | Color image edge detection method and device, equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034097A (en) * | 2010-12-21 | 2011-04-27 | 中国科学院半导体研究所 | Method for recognizing human face by comprehensively utilizing front and lateral images |
CN103679118A (en) * | 2012-09-07 | 2014-03-26 | 汉王科技股份有限公司 | Human face in-vivo detection method and system |
CN104899579A (en) * | 2015-06-29 | 2015-09-09 | 小米科技有限责任公司 | Face recognition method and face recognition device |
CN105095833A (en) * | 2014-05-08 | 2015-11-25 | 中国科学院声学研究所 | Network constructing method for human face identification, identification method and system |
CN106650670A (en) * | 2016-12-27 | 2017-05-10 | 北京邮电大学 | Method and device for detection of living body face video |
CN107862299A (en) * | 2017-11-28 | 2018-03-30 | 电子科技大学 | Living body face detection method based on near-infrared and visible-light binocular cameras |
CN108549886A (en) * | 2018-06-29 | 2018-09-18 | 汉王科技股份有限公司 | Human face in-vivo detection method and device |
US20180276488A1 (en) * | 2017-03-27 | 2018-09-27 | Samsung Electronics Co., Ltd. | Liveness test method and apparatus |
CA3060340A1 (en) * | 2017-04-21 | 2018-10-25 | SITA Advanced Travel Solutions Limited | Detection system, detection device and method therefor |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10977509B2 (en) * | 2017-03-27 | 2021-04-13 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for object detection |
CN108171212A (en) * | 2018-01-19 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting a target |
CN108388889B (en) * | 2018-03-23 | 2022-02-18 | 百度在线网络技术(北京)有限公司 | Method and device for analyzing face image |
CN109711243B (en) * | 2018-11-01 | 2021-02-09 | 长沙小钴科技有限公司 | Static three-dimensional face in-vivo detection method based on deep learning |
- 2018-11-01: CN application CN201811296335.7A filed; granted as CN109711243B (status: Active)
- 2019-10-31: WO application PCT/CN2019/114677 filed; published as WO2020088588A1 (status: Application Filing)
Non-Patent Citations (2)
Title |
---|
ANDREA LAGORIO et al.: "Liveness detection based on 3D face shape analysis", 2013 International Workshop on Biometrics and Forensics (IWBF) * |
GAN Junying et al.: "Living body face detection based on 3D convolutional neural networks", Signal Processing * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020088588A1 (en) * | 2018-11-01 | 2020-05-07 | 长沙小钴科技有限公司 | Deep learning-based static three-dimensional method for detecting whether face belongs to living body |
CN112102223B (en) * | 2019-06-18 | 2024-05-14 | 通用电气精准医疗有限责任公司 | Method and system for automatically setting scan range |
CN110348319A (en) * | 2019-06-18 | 2019-10-18 | 武汉大学 | Face anti-counterfeiting method based on fusion of face depth information and edge images |
CN112102223A (en) * | 2019-06-18 | 2020-12-18 | 通用电气精准医疗有限责任公司 | Method and system for automatically setting scanning range |
CN110298281A (en) * | 2019-06-20 | 2019-10-01 | 汉王科技股份有限公司 | Video structural method, apparatus, electronic equipment and storage medium |
CN110298281B (en) * | 2019-06-20 | 2021-10-12 | 汉王科技股份有限公司 | Video structuring method and device, electronic equipment and storage medium |
CN110287672A (en) * | 2019-06-27 | 2019-09-27 | 深圳市商汤科技有限公司 | Verification method and device, electronic equipment and storage medium |
CN110472519A (en) * | 2019-07-24 | 2019-11-19 | 杭州晟元数据安全技术股份有限公司 | Human face in-vivo detection method based on multiple models |
CN110472519B (en) * | 2019-07-24 | 2021-10-29 | 杭州晟元数据安全技术股份有限公司 | Human face in-vivo detection method based on multiple models |
CN110580454A (en) * | 2019-08-21 | 2019-12-17 | 北京的卢深视科技有限公司 | Living body detection method and device |
CN110969077A (en) * | 2019-09-16 | 2020-04-07 | 成都恒道智融信息技术有限公司 | Living body detection method based on color change |
CN111091063A (en) * | 2019-11-20 | 2020-05-01 | 北京迈格威科技有限公司 | Living body detection method, device and system |
CN111091063B (en) * | 2019-11-20 | 2023-12-29 | 北京迈格威科技有限公司 | Living body detection method, device and system |
CN112861586A (en) * | 2019-11-27 | 2021-05-28 | 马上消费金融股份有限公司 | Living body detection, image classification and model training method, device, equipment and medium |
CN111191521A (en) * | 2019-12-11 | 2020-05-22 | 智慧眼科技股份有限公司 | Face living body detection method and device, computer equipment and storage medium |
CN111191521B (en) * | 2019-12-11 | 2022-08-12 | 智慧眼科技股份有限公司 | Face living body detection method and device, computer equipment and storage medium |
CN111160309A (en) * | 2019-12-31 | 2020-05-15 | 深圳云天励飞技术有限公司 | Image processing method and related equipment |
CN111414864A (en) * | 2020-03-23 | 2020-07-14 | 深圳云天励飞技术有限公司 | Face living body detection method and related device |
CN111414864B (en) * | 2020-03-23 | 2024-03-26 | 深圳云天励飞技术有限公司 | Face living body detection method and related device |
US11836235B2 (en) | 2020-04-16 | 2023-12-05 | Samsung Electronics Co., Ltd. | Method and apparatus for testing liveness |
CN113536843B (en) * | 2020-04-16 | 2023-07-14 | 上海大学 | Anti-fake face recognition system based on multimode fusion convolutional neural network |
US11508188B2 (en) * | 2020-04-16 | 2022-11-22 | Samsung Electronics Co., Ltd. | Method and apparatus for testing liveness |
CN113536843A (en) * | 2020-04-16 | 2021-10-22 | 上海大学 | Anti-counterfeiting face recognition system based on multi-mode fusion convolutional neural network |
CN111539311B (en) * | 2020-04-21 | 2024-03-01 | 上海锘科智能科技有限公司 | Living body discrimination method, device and system based on IR and RGB dual cameras |
CN111539311A (en) * | 2020-04-21 | 2020-08-14 | 上海锘科智能科技有限公司 | Living body discrimination method, device and system based on IR and RGB dual cameras |
JP2022541709A (en) * | 2020-06-19 | 2022-09-27 | ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド | Pose detection and video processing method, device, electronic device and storage medium |
CN111666917A (en) * | 2020-06-19 | 2020-09-15 | 北京市商汤科技开发有限公司 | Pose detection and video processing method and device, electronic equipment and storage medium |
CN112001914A (en) * | 2020-08-31 | 2020-11-27 | 三星(中国)半导体有限公司 | Depth image completion method and device |
CN112001914B (en) * | 2020-08-31 | 2024-03-01 | 三星(中国)半导体有限公司 | Depth image completion method and device |
CN112052832A (en) * | 2020-09-25 | 2020-12-08 | 北京百度网讯科技有限公司 | Face detection method, device and computer storage medium |
CN112052830A (en) * | 2020-09-25 | 2020-12-08 | 北京百度网讯科技有限公司 | Face detection method, device and computer storage medium |
CN112487921A (en) * | 2020-11-25 | 2021-03-12 | 奥比中光科技集团股份有限公司 | Face image preprocessing method and system for living body detection |
CN112487922A (en) * | 2020-11-25 | 2021-03-12 | 奥比中光科技集团股份有限公司 | Multi-mode face in-vivo detection method and system |
CN112487921B (en) * | 2020-11-25 | 2023-09-08 | 奥比中光科技集团股份有限公司 | Face image preprocessing method and system for living body detection |
CN112580434B (en) * | 2020-11-25 | 2024-03-15 | 奥比中光科技集团股份有限公司 | Face false detection optimization method and system based on depth camera and face detection equipment |
CN112580434A (en) * | 2020-11-25 | 2021-03-30 | 奥比中光科技集团股份有限公司 | Face false detection optimization method and system based on depth camera and face detection equipment |
CN112487922B (en) * | 2020-11-25 | 2024-05-07 | 奥比中光科技集团股份有限公司 | Multi-mode human face living body detection method and system |
CN113449623A (en) * | 2021-06-21 | 2021-09-28 | 浙江康旭科技有限公司 | Lightweight living body detection method based on deep learning |
CN113469036A (en) * | 2021-06-30 | 2021-10-01 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020088588A1 (en) | 2020-05-07 |
CN109711243B (en) | 2021-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109711243A (en) | Static three-dimensional human face in-vivo detection method based on deep learning | |
CN105023010B (en) | Human face in-vivo detection method and system | |
CN105933589B (en) | Image processing method and terminal | |
CN105338887B (en) | Skin sensory evaluation device and skin evaluation method | |
JP5361987B2 (en) | Automatic facial detection and identity masking in images and its applications | |
CN108876833A (en) | Image processing method, image processing apparatus and computer readable storage medium | |
CN105404888B (en) | Salient object detection method combining color and depth information | |
CN108416902A (en) | Real-time object identification method and device based on difference identification | |
CN103827920B (en) | Determining object distance from an image | |
CN108229331A (en) | Face anti-spoofing detection method and system, electronic equipment, program and medium | |
CN107111743A (en) | Liveness detection using progressive eyelid tracking | |
CN106845414A (en) | Method and system for quality metrics in biometric verification | |
CN109522790A (en) | Human body attribute recognition method, device, storage medium and electronic equipment | |
CN106851238A (en) | White balance control method, white balance control device and electronic device | |
CN110032915A (en) | Human face in-vivo detection method, device and electronic equipment | |
CN108230245A (en) | Image split-joint method, image splicing device and electronic equipment | |
CN108491848A (en) | Image significance detection method based on depth information and device | |
CN113205057A (en) | Face living body detection method, device, equipment and storage medium | |
CN107018323A (en) | Control method, control device and electronic device | |
CN112802081B (en) | Depth detection method and device, electronic equipment and storage medium | |
EP2257924B1 (en) | Method for generating a density image of an observation zone | |
KR20110103571A (en) | Apparatus and method for imaging through holes in pixels of a display panel | |
KR20150128510A (en) | Apparatus and method for liveness test, and apparatus and method for image processing | |
Qiao et al. | Source camera device identification based on raw images | |
CN110532746A (en) | Face verification method, device, server and readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||