CN110059579A - Method and apparatus for liveness detection, electronic device, and storage medium

Method and apparatus for liveness detection, electronic device, and storage medium

Info

Publication number
CN110059579A
Authority
CN
China
Prior art keywords
point cloud
three-dimensional point cloud
matrix
Prior art date
Legal status
Granted
Application number
CN201910239825.1A
Other languages
Chinese (zh)
Other versions
CN110059579B (en)
Inventor
邱迪
唐宇晨
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority: CN201910239825.1A
Publication of CN110059579A
Application granted
Publication of CN110059579B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of the present disclosure is to provide a method and apparatus for liveness detection, an electronic device, and a storage medium, to address the insufficient accuracy of liveness detection in face recognition in the related art. The method includes: acquiring a three-dimensional point cloud data matrix of the face of an object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face; inputting the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose; and determining, according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model, whether the object under inspection is a living body.

Description

Method and apparatus for liveness detection, electronic device, and storage medium
Technical field
The present disclosure relates to the technical field of data processing, and in particular to a method and apparatus for liveness detection, an electronic device, and a storage medium.
Background
With the development of science and technology and improvements in data-processing efficiency, the ways of verifying legal identity are changing rapidly. In the related art, schemes have been proposed that verify a user's legal identity by collecting biometric features of the person to be verified; these features may be fingerprint features, facial features of a person, and the like.
Like other biometric features of the human body (fingerprints, irises, etc.), the face is innate, and its uniqueness and resistance to duplication provide the necessary premise for identity authentication. Compared with other types of biometric recognition, face recognition is contactless: the user does not need to touch the device for it to capture a facial image. In addition, in practical application scenarios, multiple faces can be sorted, judged, and recognized at once.
However, as face-recognition application scenarios multiply, cases arise in which criminals use masks, 3D face models, or played-back facial images of a user to impersonate a legitimate user and illegally access the legitimate user's account. Determining whether a presented face is a living body has therefore become an important component of face recognition.
Summary of the invention
The purpose of the present disclosure is to provide a method and apparatus for liveness detection, an electronic device, and a storage medium, to address the insufficient accuracy of liveness detection in face recognition in the related art.
To achieve the above object, in a first aspect, the present disclosure provides a method for liveness detection, the method including:
acquiring a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face;
inputting the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
determining, according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model, whether the object under inspection is a living body.
Optionally, the liveness detection model is a convolutional neural network model, and determining whether the object under inspection is a living body according to the corrected three-dimensional point cloud data matrix and the pre-trained liveness detection model includes:
extracting a feature matrix from the corrected three-dimensional point cloud data matrix through the pre-trained convolutional neural network model;
calculating the variance value of the feature matrix;
judging whether the variance value falls within a preset variance range, where the preset variance range is determined from variance values calculated over feature matrices of live-face samples;
if the variance value falls within the preset variance range, determining that the object under inspection is a living body.
Optionally, calculating the variance value of the feature matrix includes:
calculating the variance value $\sigma^{2}$ by the following formula:

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{i} - \mu\right)^{2}$$

where $X_{i}$ denotes the i-th element of the feature matrix; $\mu$ denotes the mean of all elements of the feature matrix; and $N$ denotes the total number of elements in the feature matrix.
Optionally, inputting the three-dimensional point cloud matrix into the pre-trained spatial transform model to obtain the corrected three-dimensional point cloud data matrix includes:
inputting the three-dimensional coordinate information in the three-dimensional point cloud data matrix into the pre-trained spatial transform model to obtain a spatial transformation matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in the preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a frontal angle.
Optionally, performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected three-dimensional point cloud data matrix includes:
performing coordinate correction on the three-dimensional point cloud data matrix by the following formula:

$$\begin{pmatrix} x^{t} \\ y^{t} \\ z^{t} \end{pmatrix} = A_{\theta}\begin{pmatrix} x^{s} \\ y^{s} \\ z^{s} \\ 1 \end{pmatrix},\qquad A_{\theta} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix}$$

where $(x^{s}, y^{s}, z^{s}, 1)^{\top}$ is the input quantity, with $(x^{s}, y^{s}, z^{s})$ the three-dimensional coordinates of the point cloud data matrix before correction; $A_{\theta}$ denotes the spatial transformation matrix, shown in its expanded form; and $(x^{t}, y^{t}, z^{t})^{\top}$ is the output quantity, the three-dimensional coordinates after correction.
Optionally, the image parameter information includes any of the following parameters:
a color parameter, a reflected-intensity parameter, a temperature parameter.
Optionally, before the three-dimensional point cloud data matrix of the face of the object under inspection is acquired through the 3D camera module, the method further includes:
determining the distance between the face of the object under inspection and the 3D camera module through a range sensor;
if the distance is less than a preset distance threshold, acquiring the three-dimensional point cloud data matrix of the face through the 3D camera module;
if the distance is not less than the preset distance threshold, issuing prompt information instructing the object under inspection to move closer to the 3D camera module.
In a second aspect, the present disclosure provides an apparatus for liveness detection, the apparatus including:
a point cloud acquisition module, configured to acquire a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face;
a point cloud correction module, configured to input the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
a determining module, configured to determine, according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model, whether the object under inspection is a living body.
Optionally, the liveness detection model is a convolutional neural network model, and the determining module is configured to:
extract a feature matrix from the corrected three-dimensional point cloud data matrix through the pre-trained convolutional neural network model;
calculate the variance value of the feature matrix;
judge whether the variance value falls within a preset variance range, where the preset variance range is determined from variance values calculated over feature matrices of live-face samples;
if the variance value falls within the preset variance range, determine that the object under inspection is a living body.
Optionally, the determining module is configured to calculate the variance value $\sigma^{2}$ by the following formula:

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{i} - \mu\right)^{2}$$

where $X_{i}$ denotes the i-th element of the feature matrix; $\mu$ denotes the mean of all elements of the feature matrix; and $N$ denotes the total number of elements in the feature matrix.
Optionally, the point cloud correction module is configured to:
input the three-dimensional coordinate information in the three-dimensional point cloud data matrix into the pre-trained spatial transform model to obtain a spatial transformation matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in the preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
perform coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a frontal angle.
Optionally, the point cloud correction module is configured to perform coordinate correction on the three-dimensional point cloud data matrix by the following formula:

$$\begin{pmatrix} x^{t} \\ y^{t} \\ z^{t} \end{pmatrix} = A_{\theta}\begin{pmatrix} x^{s} \\ y^{s} \\ z^{s} \\ 1 \end{pmatrix},\qquad A_{\theta} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix}$$

where $(x^{s}, y^{s}, z^{s}, 1)^{\top}$ is the input quantity, with $(x^{s}, y^{s}, z^{s})$ the three-dimensional coordinates of the point cloud data matrix before correction; $A_{\theta}$ denotes the spatial transformation matrix, shown in its expanded form; and $(x^{t}, y^{t}, z^{t})^{\top}$ is the output quantity, the three-dimensional coordinates after correction.
Optionally, the image parameter information includes any of the following parameters: a color parameter, a reflected-intensity parameter, a temperature parameter.
Optionally, the apparatus further includes a prompt module, configured to perform the following operations before the three-dimensional point cloud data matrix of the face of the object under inspection is acquired through the 3D camera module:
determine the distance between the face of the object under inspection and the 3D camera module through a range sensor;
if the distance is less than a preset distance threshold, acquire the three-dimensional point cloud data matrix of the face through the 3D camera module;
if the distance is not less than the preset distance threshold, issue prompt information instructing the object under inspection to move closer to the 3D camera module.
In a third aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the above methods for liveness detection.
In a fourth aspect, the present disclosure provides an electronic device, including:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of any one of the above methods for liveness detection.
The above technical solution can achieve at least the following technical effects:
The three-dimensional point cloud data matrix of the face of the object under inspection is acquired through a 3D camera module; that point cloud matrix is input into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix; further, whether the object under inspection is a living body is determined according to the corrected matrix and a pre-trained liveness detection model. Because the point cloud matrix is acquired directly by the 3D camera module and liveness is then verified on that matrix, intermediate processing steps that degrade the data are reduced as far as possible, which cuts data loss and preserves the accuracy of the information input to the subsequent liveness detection model. A single shot of the 3D camera module yields the whole point cloud matrix, so data acquisition is more efficient, the user's data-entry time is shortened, and detection is faster; in addition, the unobtrusiveness of face liveness detection is improved, so the scheme can be applied more widely to guard against illegitimate face-model attacks. Moreover, compared with schemes that perform liveness detection with a depth map alone, a point cloud can carry more parameter information and can more effectively block 3D face-model attacks.
Other features and advantages of the present disclosure will be described in detail in the detailed description section that follows.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute part of the specification; together with the following detailed description, they serve to explain the disclosure but do not limit it. In the drawings:
Fig. 1 is a flowchart of a method for liveness detection according to an exemplary embodiment.
Fig. 2 is a schematic diagram of an implementation scenario of a method for liveness detection according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a principle according to an exemplary embodiment.
Fig. 4 is another schematic diagram of a principle according to an exemplary embodiment.
Fig. 5 is a flowchart of another method for liveness detection according to an exemplary embodiment.
Fig. 6 is a flowchart of another method for liveness detection according to an exemplary embodiment.
Fig. 7 is a block diagram of an apparatus for liveness detection according to an exemplary embodiment.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
Specific embodiment
The specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to describe and explain the disclosure, not to limit it.
Fig. 1 is a flowchart of a method for liveness detection according to an exemplary embodiment. The method includes:
S11: acquire a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face.
Fig. 2 is a schematic diagram of acquiring the three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module. As shown in Fig. 2, the terminal is equipped with a 3D camera module comprising a dot projector (Dot Projector), a front camera (Front Camera), a receiver (Receiver), a range sensor (Ranging Sensor), an infrared sensor (Infrared Sensor), and a flood illuminator (Flood Illuminator). The components of the module work in concert to project the dot pattern and collect the point cloud data.
Specifically, the flood illuminator uses a low-power vertical-cavity surface-emitting laser (Vertical Cavity Surface Emitting Laser, VCSEL) to cast "non-structured" (Non-structured) infrared light onto the surface of the object under inspection. The range sensor likewise uses a low-power VCSEL to emit infrared laser light; when an object approaches, the light is reflected back, so the terminal knows the object under inspection is near. The dot projector uses a high-power VCSEL whose infrared laser, shaped by internal wafer-level optics (Wafer Level Optics, WLO) and diffractive optical elements (Diffractive Optical Elements, DOE), projects roughly 30,000 "structured" (Structured) light dots onto the surface of the object. The array formed by these dots is reflected back to the infrared camera (Infrared Camera), from which the distance (depth) of each location on the face is calculated.
In a specific implementation, the image parameter information includes any of the following parameters: a color parameter, a reflected-intensity parameter, a temperature parameter. That is, besides its three-dimensional coordinates, each point in the collected face point cloud may also carry the color value, reflected-intensity value, and temperature value sampled at that point. For example, if N points are acquired and each point carries (x, y, z) three-dimensional coordinate data (3 dimensions) plus M kinds of parameters, the resulting three-dimensional point cloud data matrix is in fact an N × (3 + M) matrix, as sketched below.
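As an illustration only (the patent does not prescribe a storage layout), such an N × (3 + M) matrix could be laid out as follows, assuming M = 3 image parameters; all values here are random placeholders standing in for real sensor readings:

```python
import numpy as np

# Illustrative layout of the N x (3 + M) point-cloud matrix, assuming M = 3
# image parameters (color, reflected intensity, temperature).
N, M = 30000, 3
cloud = np.empty((N, 3 + M), dtype=np.float32)

cloud[:, 0:3] = np.random.rand(N, 3)  # columns 0-2: (x, y, z) coordinates of each sampled point
cloud[:, 3] = np.random.rand(N)       # column 3: color parameter value at the point
cloud[:, 4] = np.random.rand(N)       # column 4: reflected-intensity value at the point
cloud[:, 5] = np.random.rand(N)       # column 5: temperature parameter value at the point

coords = cloud[:, :3]  # the coordinate part later fed to the spatial transform model
```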
In an optional embodiment, before the three-dimensional point cloud data matrix of the face is acquired through the 3D camera module, the method further includes: determining the distance between the face of the object under inspection and the 3D camera module through a range sensor; if the distance is less than a preset distance threshold, acquiring the three-dimensional point cloud data matrix of the face through the 3D camera module; if the distance is not less than the preset distance threshold, issuing prompt information instructing the object under inspection to move closer to the 3D camera module.
The object under inspection can then close the distance to the 3D camera module as instructed, allowing the module to acquire a more accurate three-dimensional point cloud data matrix. A minimal sketch of this gating logic follows.
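A minimal sketch, assuming a 0.5 m threshold and stand-in callables for the range sensor and the camera module (none of which are specified by the patent):

```python
def acquire_point_cloud(measure_distance, capture_cloud, max_distance_m=0.5):
    """Gate acquisition on the range-sensor reading: capture only when the
    face is close enough, otherwise prompt the user to move closer."""
    distance = measure_distance()  # range-sensor reading, in meters
    if distance < max_distance_m:
        return capture_cloud()     # the N x (3 + M) point-cloud matrix
    print("Please move closer to the 3D camera module")
    return None

# Usage with stand-in callables for the sensor and the camera module:
cloud = acquire_point_cloud(lambda: 0.3, lambda: "point-cloud matrix")
```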
S12: input the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose.
The object under inspection often estimates, by eye, its position relative to the 3D camera module, assessing its positional relationship to each sensor in the module, and then aligns itself by shifting posture or adjusting the terminal's attitude. This vision-based adjustment, however, carries some uncertainty: if the collected face of the object under inspection is not in a frontal pose, subsequent data processing suffers a relatively large error.
It is worth noting that if the three-dimensional point cloud data matrix is later fed into a convolutional neural network to classify the object as live or not (that is, to identify whether the object under inspection is a living body), classification accuracy must be guaranteed over diverse input samples. This requires the input three-dimensional point cloud data matrix to satisfy properties such as local invariance, translation invariance, scale invariance, and rotation invariance. In other words, whatever pose the point cloud matrix is collected in, its pose needs to be converted into one that favors data processing by the convolutional neural network, for example a pose in which the face is at a frontal angle, so as to guarantee the accuracy of the network's processing.
In an optional embodiment, inputting the three-dimensional point cloud matrix into the pre-trained spatial transform model to obtain the corrected three-dimensional point cloud data matrix includes: inputting the three-dimensional coordinate information in the matrix into the pre-trained spatial transform model to obtain a spatial transformation matrix, where the spatial transform model is trained on the coordinate distribution features of sample matrices that are not in the preset pose and the coordinate distribution features of those samples after manual correction into the preset pose; and performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected matrix.
Specifically, a pose denotes the position and orientation of the object under inspection. A three-dimensional coordinate system (x, y, z) is established in the space where the 3D camera module sits. The position of the object can be expressed by (x, y, z), and its orientation by the rotation angles (α, β, γ) about the respective coordinate axes. Pose correction of the three-dimensional point cloud data matrix can adjust the three-dimensional coordinate position of every point in it, thereby adjusting the position and orientation of the object as a whole.
Performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected matrix includes correcting the coordinates by the following formula:

$$\begin{pmatrix} x^{t} \\ y^{t} \\ z^{t} \end{pmatrix} = A_{\theta}\begin{pmatrix} x^{s} \\ y^{s} \\ z^{s} \\ 1 \end{pmatrix},\qquad A_{\theta} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix}$$

where $(x^{s}, y^{s}, z^{s}, 1)^{\top}$ is the input quantity, with $(x^{s}, y^{s}, z^{s})$ the three-dimensional coordinates of the point cloud data matrix before correction; $A_{\theta}$ denotes the spatial transformation matrix, shown in its expanded form; and $(x^{t}, y^{t}, z^{t})^{\top}$ is the output quantity, the three-dimensional coordinates after correction. A small numeric sketch follows.
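A small numeric sketch of this correction, assuming the standard homogeneous-coordinate form above; in the method itself $A_{\theta}$ is regressed by the trained spatial transform model, whereas here it is a hand-picked rotation for illustration:

```python
import numpy as np

def correct_coordinates(coords, a_theta):
    """Apply the 3x4 spatial transformation matrix A_theta to every point,
    following the homogeneous-coordinate formula above.
    coords: (N, 3) pre-correction coordinates; returns (N, 3) corrected ones."""
    n = coords.shape[0]
    homogeneous = np.hstack([coords, np.ones((n, 1))])  # append the constant 1: (x, y, z, 1)
    return homogeneous @ a_theta.T                      # (N, 4) @ (4, 3) -> (N, 3)

# Hand-picked A_theta for illustration: a 10-degree rotation about the y-axis,
# with no translation component.
angle = np.deg2rad(10.0)
a_theta = np.array([[np.cos(angle),  0.0, np.sin(angle), 0.0],
                    [0.0,            1.0, 0.0,           0.0],
                    [-np.sin(angle), 0.0, np.cos(angle), 0.0]])
corrected = correct_coordinates(np.random.rand(5, 3), a_theta)  # placeholder points
```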
S13: determine, according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model, whether the object under inspection is a living body.
Optionally, a feature matrix can be extracted from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model; the feature matrix is then input into a preset classification function model to obtain the probability value that the face three-dimensional point cloud data matrix belongs to the living class; if the probability value is greater than a preset probability threshold, the object under inspection is determined to be a living body.
A convolutional neural network is a multi-layer neural network comprising convolutional layers, pooling layers, and the like. Its artificial neurons respond to surrounding units; by continually reducing the dimensionality of the enormous data volumes involved in image recognition, it can ultimately be trained to perform classification, localization, detection, and similar functions. Moreover, a convolutional neural network is trainable: its training relies on the chain rule of differentiation to compute gradients at the hidden-layer nodes, that is, gradient descent with chain-rule back-propagation.
The convolutional neural network schematic of Fig. 3 shows the composition of a neural network provided by an embodiment of the present disclosure: an input layer feeds, through hidden layers, into an output layer. The hidden layers comprise many different layers (convolutional layers, pooling layers, activation-function layers, fully connected layers, and so on).
Feature extraction is completed by down-sampling through the convolutional layers, producing the feature matrix. See the operation schematic of Fig. 4 provided by an embodiment of the present disclosure.
First, the three-dimensional point cloud data matrix above can be the 7×7 input matrix in the figure, each entry of which is a source pixel. A 3×3 filter window (filter) traverses the input matrix, performing a convolution operation at each position to obtain the value output by that operation. The filter window is also known as the convolution kernel.
The specific convolution computation is shown at the upper right of the figure; the result of the operation is -8.
The central value of the window selected at the center of the input matrix in Fig. 4 (the pixel whose value is 1) is replaced by the result of the convolution, that is, by the value -8.
It is worth noting that the above computation may be iterated as many times as needed until the required feature matrix is produced. A minimal sketch of the sliding-window operation follows.
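A minimal sketch of the sliding-window operation (stride 1, no padding; like most CNN layers, it computes the element-wise product-and-sum, i.e. cross-correlation, form without flipping the kernel); the input and kernel values are random placeholders:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Stride-1, no-padding sliding-window convolution as in the Fig. 4
    walkthrough: a 7x7 input and a 3x3 kernel yield a 5x5 feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # element-wise product of the window and the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

feature_map = conv2d_valid(np.random.rand(7, 7), np.random.rand(3, 3))  # placeholder data
```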
The preset classification function model is a Softmax classification function. Specifically, when the feature matrix is input into the Softmax classification-function model, the outputs of the several neurons can be mapped into the interval from 0 to 1. In other words, after the feature matrix is extracted by the convolutional neural network, it is fed to the Softmax layer, whose final output is the probability value corresponding to each class.
Specifically, a fully connected transformation is applied to the image feature matrix, yielding an output multi-dimensional feature vector whose number of dimensions corresponds to the number of classes of the Softmax classification function.
After the convolution operations of the convolutional neural network, the output of each neuron can be expressed by the following formula:

$$z_{i} = \sum_{j} w_{ij} x_{ij} + b$$

where $x_{ij}$ is the j-th input value of the i-th neuron; $w_{ij}$ is the j-th weight of the i-th neuron; $b$ is the offset value; and $z_{i}$ denotes the i-th output of the network, that is, the i-th value of the multi-dimensional feature vector.
Further, the probability value that the face image belongs to the living class is determined according to the following formula:

$$a_{i} = \frac{e^{z_{i}}}{\sum_{k} e^{z_{k}}}$$

where $a_{i}$ represents the probability value of the i-th Softmax class, and $z_{i}$ is the i-th value of the multi-dimensional feature vector.
For example, suppose the Softmax classification function has two classes: the first class is "non-living", with corresponding probability value $a_{1}$; the second class is "living", with corresponding probability value $a_{2}$. Through the probability formula above, the probability value $a_{2}$ that the face belongs to the living class is obtained, as in the sketch below.
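A minimal sketch of this two-class Softmax step; the logits are placeholders, and the max-subtraction is a standard numerical-stability trick not mentioned in the patent:

```python
import numpy as np

def softmax(z):
    """a_i = exp(z_i) / sum_k exp(z_k), as in the formula above."""
    e = np.exp(z - np.max(z))  # subtract max(z) for numerical stability
    return e / e.sum()

# Two classes as in the example: index 0 = "non-living", index 1 = "living".
z = np.array([0.4, 2.1])  # placeholder fully-connected outputs
a = softmax(z)
p_live = a[1]             # a2: probability that the face belongs to the living class
```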
The above technical solution can achieve at least the following technical effects:
The three-dimensional point cloud data matrix of the face of the object under inspection is acquired through a 3D camera module; that point cloud matrix is input into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix; further, whether the object under inspection is a living body is determined according to the corrected matrix and a pre-trained liveness detection model. Because the point cloud matrix is acquired directly by the 3D camera module and liveness is then verified on that matrix, intermediate processing steps that degrade the data are reduced as far as possible, which cuts data loss and preserves the accuracy of the information input to the subsequent liveness detection model. A single shot of the 3D camera module yields the whole point cloud matrix, so data acquisition is more efficient, the user's data-entry time is shortened, and detection is faster; in addition, the unobtrusiveness of face liveness detection is improved, so the scheme can be applied more widely to guard against illegitimate face-model attacks. Moreover, compared with schemes that perform liveness detection with a depth map alone, a point cloud can carry more parameter information and can more effectively block 3D face-model attacks.
Fig. 5 is a flowchart of a method for liveness detection according to an exemplary embodiment. The method includes:
S51: acquire a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face.
In a specific implementation, the image parameter information includes any of the following parameters: a color parameter, a reflected-intensity parameter, a temperature parameter. That is, besides its three-dimensional coordinates, each point in the collected face point cloud may also carry the color value, reflected-intensity value, and temperature value sampled at that point. For example, if N points are acquired and each point carries (x, y, z) three-dimensional coordinate data (3 dimensions) plus M kinds of parameters, the resulting three-dimensional point cloud data matrix is in fact an N × (3 + M) matrix.
In an optional embodiment, before the three-dimensional point cloud data matrix of the face is acquired through the 3D camera module, the method further includes: determining the distance between the face of the object under inspection and the 3D camera module through a range sensor; if the distance is less than a preset distance threshold, acquiring the three-dimensional point cloud data matrix of the face through the 3D camera module; if the distance is not less than the preset distance threshold, issuing prompt information instructing the object under inspection to move closer to the 3D camera module.
The object under inspection can then close the distance to the 3D camera module as instructed, allowing the module to acquire a more accurate three-dimensional point cloud data matrix.
S52: input the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose.
The object under inspection often estimates, by eye, its position relative to the 3D camera module, assessing its positional relationship to each sensor in the module, and then aligns itself by shifting posture or adjusting the terminal's attitude. This vision-based adjustment, however, carries some uncertainty: if the collected face of the object under inspection is not in a frontal pose, subsequent data processing suffers a relatively large error.
It is worth noting that if the three-dimensional point cloud data matrix is later fed into a convolutional neural network to classify the object as live or not (that is, to identify whether the object under inspection is a living body), classification accuracy must be guaranteed over diverse input samples. This requires the input three-dimensional point cloud data matrix to satisfy properties such as local invariance, translation invariance, scale invariance, and rotation invariance. In other words, whatever pose the point cloud matrix is collected in, its pose needs to be converted into one that favors data processing by the convolutional neural network, for example a pose in which the face is at a frontal angle, so as to guarantee the accuracy of the network's processing.
In an optional embodiment, inputting the three-dimensional point cloud matrix into the pre-trained spatial transform model to obtain the corrected three-dimensional point cloud data matrix includes: inputting the three-dimensional coordinate information in the matrix into the pre-trained spatial transform model to obtain a spatial transformation matrix, where the spatial transform model is trained on the coordinate distribution features of sample matrices that are not in the preset pose and the coordinate distribution features of those samples after manual correction into the preset pose; and performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected matrix.
Specifically, a pose denotes the position and orientation of the object under inspection. A three-dimensional coordinate system (x, y, z) is established in the space where the 3D camera module sits. The position of the object can be expressed by (x, y, z), and its orientation by the rotation angles (α, β, γ) about the respective coordinate axes. Pose correction of the three-dimensional point cloud data matrix can adjust the three-dimensional coordinate position of every point in it, thereby adjusting the position and orientation of the object as a whole.
Performing coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected matrix includes correcting the coordinates by the following formula:

$$\begin{pmatrix} x^{t} \\ y^{t} \\ z^{t} \end{pmatrix} = A_{\theta}\begin{pmatrix} x^{s} \\ y^{s} \\ z^{s} \\ 1 \end{pmatrix},\qquad A_{\theta} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix}$$

where $(x^{s}, y^{s}, z^{s}, 1)^{\top}$ is the input quantity, with $(x^{s}, y^{s}, z^{s})$ the three-dimensional coordinates of the point cloud data matrix before correction; $A_{\theta}$ denotes the spatial transformation matrix, shown in its expanded form; and $(x^{t}, y^{t}, z^{t})^{\top}$ is the output quantity, the three-dimensional coordinates after correction.
S53: extract a feature matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model.
S54: calculate the variance value of the feature matrix.
Optionally, calculating the variance value of the feature matrix includes calculating the variance value $\sigma^{2}$ by the following formula:

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{i} - \mu\right)^{2}$$

where $X_{i}$ denotes the i-th element of the feature matrix; $\mu$ denotes the mean of all elements of the feature matrix; and $N$ denotes the total number of elements in the feature matrix.
S55: judge whether the variance value falls within a preset variance range, where the preset variance range is determined from variance values calculated over feature matrices of live-face samples.
S56: if the variance value falls within the preset variance range, determine that the object under inspection is a living body.
It is worth noting that judging whether the object under inspection is a living body from where the variance value of the feature matrix falls reduces the amount of data processing, so the live/non-live inspection result can be obtained more quickly. A minimal sketch of steps S54-S56 follows.
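A minimal sketch of this variance test, assuming illustrative bounds for the preset variance range (the patent derives the range from live-face sample statistics but does not give values):

```python
import numpy as np

def variance_liveness_check(feature_matrix, var_low, var_high):
    """Steps S54-S56: compute the variance of the feature matrix and test it
    against the preset variance range."""
    x = np.asarray(feature_matrix).ravel()
    sigma2 = np.mean((x - x.mean()) ** 2)  # (1/N) * sum((X_i - mu)^2)
    return var_low <= sigma2 <= var_high   # True -> judged to be a living body

is_live = variance_liveness_check(np.random.rand(8, 8), 0.05, 0.12)  # bounds are made up
```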
Fig. 6 is a flowchart of a method for liveness detection according to an exemplary embodiment. The method includes:
S61: acquire a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face.
S62: input the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose.
S63: extract a feature matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model.
S64: calculate the variance value of the feature matrix.
S65: input the feature matrix into the preset classification function model to obtain the probability value that the face three-dimensional point cloud data matrix belongs to the living class.
S66: if the variance value falls within the preset variance range and the probability value is greater than the preset probability threshold, determine that the object under inspection is a living body, where the preset variance range is determined from variance values calculated over feature matrices of live-face samples.
In the technical scheme of this embodiment, whether the object under inspection is a living body is judged both by the probability value obtained from the classification function and by the variance of the feature matrix. In this way, results from two different types of data cross-verify each other, improving the accuracy and safety of liveness detection, as in the sketch below.
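A minimal sketch of this dual check; the variance bounds and probability threshold here are illustrative assumptions, not values from the patent:

```python
def dual_liveness_check(sigma2, p_live, var_low, var_high, p_threshold=0.9):
    """Step S66: the object is judged to be a living body only when BOTH the
    feature-matrix variance falls within the preset variance range AND the
    Softmax living-class probability exceeds the preset probability threshold."""
    variance_ok = var_low <= sigma2 <= var_high
    probability_ok = p_live > p_threshold
    return variance_ok and probability_ok

print(dual_liveness_check(sigma2=0.08, p_live=0.97, var_low=0.05, var_high=0.12))  # True
```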
It is worth noting that, for simplicity of description, the above method embodiments are stated as series of combined actions, but those skilled in the art should understand that the present invention is not limited by the order of the actions described. For example, steps S64 and S65 may be executed one after the other in either order, or in parallel. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the present invention. For example, step S62 may be omitted, in which case, in step S63, the feature matrix is extracted, through the pre-trained convolutional neural network model, from the originally acquired three-dimensional point cloud data matrix. In that example, cross-verification by the two different types of data results in the subsequent steps still achieves the effect of improving the accuracy and safety of liveness detection.
Fig. 7 is a block diagram of an apparatus for liveness detection according to an exemplary embodiment. The apparatus includes:
a point cloud acquisition module 710, configured to acquire a three-dimensional point cloud data matrix of the face of the object under inspection through a 3D camera module, where the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face;
a point cloud correction module 720, configured to input the three-dimensional point cloud matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transform model is trained on the coordinate distribution features of sample matrices that are not in a preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
a determining module 730, configured to determine, according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model, whether the object under inspection is a living body.
The above technical solution can achieve at least the following technical effects:
The three-dimensional point cloud data matrix of the face of the object under inspection is acquired through a 3D camera module; that point cloud matrix is input into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix; further, whether the object under inspection is a living body is determined according to the corrected matrix and a pre-trained liveness detection model. Because the point cloud matrix is acquired directly by the 3D camera module and liveness is then verified on that matrix, intermediate processing steps that degrade the data are reduced as far as possible, which cuts data loss and preserves the accuracy of the information input to the subsequent liveness detection model. A single shot of the 3D camera module yields the whole point cloud matrix, so data acquisition is more efficient, the user's data-entry time is shortened, and detection is faster; in addition, the unobtrusiveness of face liveness detection is improved, so the scheme can be applied more widely to guard against illegitimate face-model attacks. Moreover, compared with schemes that perform liveness detection with a depth map alone, a point cloud can carry more parameter information and can more effectively block 3D face-model attacks.
Optionally, the liveness detection model is a convolutional neural network model, and the determining module is configured to:
extract a feature matrix from the corrected three-dimensional point cloud data matrix through the pre-trained convolutional neural network model;
calculate the variance value of the feature matrix;
judge whether the variance value falls within a preset variance range, where the preset variance range is determined from variance values calculated over feature matrices of live-face samples;
if the variance value falls within the preset variance range, determine that the object under inspection is a living body.
Optionally, the determining module is configured to calculate the variance value $\sigma^{2}$ by the following formula:

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}\left(X_{i} - \mu\right)^{2}$$

where $X_{i}$ denotes the i-th element of the feature matrix; $\mu$ denotes the mean of all elements of the feature matrix; and $N$ denotes the total number of elements in the feature matrix.
Optionally, the point cloud correction module is configured to:
input the three-dimensional coordinate information in the three-dimensional point cloud data matrix into the pre-trained spatial transform model to obtain a spatial transformation matrix, where the spatial transform model is trained on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in the preset pose and the coordinate distribution features of those samples after manual correction into the preset pose;
perform coordinate correction on the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain the corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a frontal angle.
Optionally, the point cloud correction module is configured to perform coordinate correction on the three-dimensional point cloud data matrix by the following formula:

$$\begin{pmatrix} x^{t} \\ y^{t} \\ z^{t} \end{pmatrix} = A_{\theta}\begin{pmatrix} x^{s} \\ y^{s} \\ z^{s} \\ 1 \end{pmatrix},\qquad A_{\theta} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix}$$

where $(x^{s}, y^{s}, z^{s}, 1)^{\top}$ is the input quantity, with $(x^{s}, y^{s}, z^{s})$ the three-dimensional coordinates of the point cloud data matrix before correction; $A_{\theta}$ denotes the spatial transformation matrix, shown in its expanded form; and $(x^{t}, y^{t}, z^{t})^{\top}$ is the output quantity, the three-dimensional coordinates after correction.
Optionally, the image parameter information includes any of the following parameters: a color parameter, a reflected-intensity parameter, a temperature parameter.
Optionally, the apparatus further includes a prompt module, configured to perform the following operations before the three-dimensional point cloud data matrix of the face of the object under inspection is acquired through the 3D camera module:
determine the distance between the face of the object under inspection and the 3D camera module through a range sensor;
if the distance is less than a preset distance threshold, acquire the three-dimensional point cloud data matrix of the face through the 3D camera module;
if the distance is not less than the preset distance threshold, issue prompt information instructing the object under inspection to move closer to the 3D camera module.
Regarding the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
The present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any one of the above methods for liveness detection.
The present disclosure provides an electronic device, including:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of any one of the above methods for liveness detection.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment. The electronic device may be provided as a smartphone, a smart tablet, a wearable device, a personal finance terminal, or the like. As shown in Fig. 8, the electronic device 800 may include a processor 801 and a memory 802, and may further include one or more of a multimedia component 803, an input/output (I/O) interface 804, and a communication component 805.
The processor 801 controls the overall operation of the electronic device 800 and completes all or part of the steps of the above method for liveness detection. The memory 802 stores the various types of data needed to support operation on the electronic device 800; this data may include, for example, instructions for any application or method operating on the device and application-related data, such as liveness detection model data and point-cloud parameters, as well as contact data, sent and received messages, pictures, audio, video, and so on. The memory 802 may be implemented by any type of volatile or non-volatile storage device, or a combination of them, such as static random access memory (Static Random Access Memory, SRAM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), programmable read-only memory (Programmable Read-Only Memory, PROM), read-only memory (Read-Only Memory, ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 803 may include a screen and an audio component; the screen may, for example, be a touchscreen, and the audio component outputs and/or inputs audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 802 or sent through the communication component 805. The audio component also includes at least one loudspeaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual or physical. The communication component 805 handles wired or wireless communication between the electronic device 800 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, near-field communication (Near Field Communication, NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited here; accordingly, the communication component 805 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (Digital Signal Processing Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field Programmable Gate Array, FPGA), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the above method for liveness detection.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above method for liveness detection are realized. For example, the computer-readable storage medium may be the above memory 802 including program instructions, which may be executed by the processor 801 of the electronic device 800 to complete the above method for liveness detection.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, a variety of simple variants can be made to the technical solution of the disclosure, and these simple variants all fall within the protection scope of the disclosure.
It should be further noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the disclosure does not further explain the various possible combinations.
In addition, the various different embodiments of the disclosure can also be combined arbitrarily, and as long as such combinations do not depart from the idea of the disclosure, they should likewise be regarded as content disclosed by the disclosure.

Claims (11)

1. A method for liveness detection, characterized in that the method comprises:
obtaining a three-dimensional point cloud data matrix of the face of an object to be checked through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face of the object to be checked;
inputting the three-dimensional point cloud data matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, wherein the spatial transform model is obtained by training on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of the same sample matrices after manual correction into the preset pose;
determining whether the object to be checked is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model.
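Read as an algorithm, claim 1 is a three-step pipeline: acquire, correct, classify. A minimal Python sketch follows, with hypothetical stand-ins for the two pre-trained models and an illustrative variance-based decision (per claims 2 and 3); none of the names or values below come from the patent itself.

```python
import numpy as np

# Hypothetical stand-ins for the two pre-trained models named in claim 1.
# A real system would load a trained spatial transform model and a trained
# liveness detection CNN; these placeholders only make the sketch runnable.
def spatial_transform_model(coords):
    # Predict a 3x4 space conversion matrix A_theta from the raw coordinates;
    # the identity transform stands in for a learned prediction.
    return np.hstack([np.eye(3), np.zeros((3, 1))])

def liveness_detection_model(cloud):
    # Stand-in "feature matrix"; a real model would be a trained CNN.
    return cloud[:, :3]

def is_living_body(point_cloud, var_lo=0.01, var_hi=1.0):
    """point_cloud: (N, C) matrix; columns 0..2 hold x, y, z coordinates and
    the remaining columns hold image parameter information per sampled point."""
    coords = point_cloud[:, :3]
    a_theta = spatial_transform_model(coords)               # space conversion matrix
    homog = np.hstack([coords, np.ones((len(coords), 1))])  # homogeneous coordinates
    corrected = homog @ a_theta.T                           # corrected coordinates
    cloud = np.hstack([corrected, point_cloud[:, 3:]])      # corrected point cloud matrix
    features = liveness_detection_model(cloud)              # feature matrix
    return var_lo <= features.var() <= var_hi               # variance inside preset range?

print(is_living_body(np.random.rand(500, 4)))
```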
2. The method according to claim 1, characterized in that the liveness detection model is a convolutional neural network model, and determining whether the object to be checked is a living body according to the corrected three-dimensional point cloud data matrix and the pre-trained liveness detection model comprises:
extracting a feature matrix from the corrected three-dimensional point cloud data matrix through the pre-trained convolutional neural network model;
calculating the variance value of the feature matrix;
judging whether the variance value lies within a preset variance range, wherein the preset variance range is determined according to variance values calculated from the feature matrices of living-body sample faces;
if the variance value lies within the preset variance range, determining that the object to be checked is a living body.
3. The method according to claim 2, characterized in that calculating the variance value of the feature matrix comprises:
calculating the variance value σ² by the following formula:

σ² = Σ(X − μ)² / N

wherein X denotes any element in the feature matrix, μ denotes the mean of all elements in the feature matrix, N denotes the total number of elements in the feature matrix, and the sum runs over all N elements.
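The formula is the ordinary population variance taken over every element of the feature matrix. A short NumPy check, with an arbitrary example matrix:

```python
import numpy as np

# Population variance over all elements of a matrix, matching claim 3:
# sigma^2 = sum((X - mu)^2) / N.
def feature_matrix_variance(feature_matrix):
    mu = feature_matrix.mean()               # mean of all elements (mu)
    n = feature_matrix.size                  # total number of elements (N)
    return ((feature_matrix - mu) ** 2).sum() / n

features = np.array([[0.2, 0.8], [0.5, 0.1]])
sigma2 = feature_matrix_variance(features)   # 0.075 for this example
assert np.isclose(sigma2, features.var())    # NumPy's population variance agrees
```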
4. The method according to claim 1, characterized in that inputting the three-dimensional point cloud data matrix into the pre-trained spatial transform model to obtain the corrected three-dimensional point cloud data matrix comprises:
inputting the three-dimensional coordinate information in the three-dimensional point cloud data matrix into the pre-trained spatial transform model to obtain a space conversion matrix, wherein the spatial transform model is obtained by training on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in the preset pose and the coordinate distribution features of the same sample matrices after manual correction into the preset pose;
performing coordinate correction on the three-dimensional point cloud data matrix through the space conversion matrix to obtain the corrected three-dimensional point cloud data matrix.
5. The method according to claim 4, characterized in that the preset pose is a pose in which the face is at a frontal angle.
6. The method according to claim 4, characterized in that performing coordinate correction on the three-dimensional point cloud data matrix through the space conversion matrix to obtain the corrected three-dimensional point cloud data matrix comprises:
performing coordinate correction on the three-dimensional point cloud data matrix by the following formula:

(x_i^t, y_i^t, z_i^t)^T = A_θ · (x_i^s, y_i^s, z_i^s, 1)^T

wherein (x_i^s, y_i^s, z_i^s, 1)^T is the input quantity, with (x_i^s, y_i^s, z_i^s) denoting the three-dimensional coordinates of a sampled point in the three-dimensional point cloud data matrix before correction; A_θ denotes the space conversion matrix, whose expansion is the 3×4 parameter matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; and (x_i^t, y_i^t, z_i^t)^T is the output quantity, denoting the three-dimensional coordinates of the corresponding sampled point in the three-dimensional point cloud data matrix after correction.
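In code, the correction amounts to one matrix product per sampled point in homogeneous coordinates. A sketch, where the example A_θ is an arbitrary rotation rather than a learned matrix:

```python
import numpy as np

# Apply a 3x4 space conversion matrix A_theta to every sampled point in
# homogeneous coordinates, as in the formula of claim 6. The example
# A_theta (a 15-degree rotation about the z-axis) is made up for the sketch.
def correct_point_cloud(coords, a_theta):
    """coords: (N, 3) pre-correction coordinates; a_theta: (3, 4) matrix;
    returns the (N, 3) corrected coordinates."""
    homog = np.hstack([coords, np.ones((len(coords), 1))])  # (x, y, z, 1)
    return homog @ a_theta.T

angle = np.deg2rad(15.0)
a_theta = np.array([[np.cos(angle), -np.sin(angle), 0.0, 0.0],
                    [np.sin(angle),  np.cos(angle), 0.0, 0.0],
                    [0.0,            0.0,           1.0, 0.0]])
corrected = correct_point_cloud(np.random.rand(100, 3), a_theta)
```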
7. The method according to any one of claims 1-6, characterized in that the image parameter information includes any of the following parameters:
a color parameter, a reflected intensity parameter, a temperature parameter.
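For concreteness, one way such a matrix might be laid out in memory is sketched below; the column ordering and channel names are assumptions of this sketch, not fixed by the claim:

```python
import numpy as np

# Coordinates followed by one column per image parameter listed in claim 7.
CHANNELS = ("x", "y", "z", "color", "reflected_intensity", "temperature")

cloud = np.zeros((4, len(CHANNELS)))     # four sampled points
cloud[:, :3] = np.random.rand(4, 3)      # three-dimensional coordinate information
cloud[:, 3:] = np.random.rand(4, 3)      # image parameter information
temperature = cloud[:, CHANNELS.index("temperature")]
```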
8. The method according to any one of claims 1-5, characterized in that, before obtaining the three-dimensional point cloud data matrix of the face of the object to be checked through the three-dimensional camera module, the method further comprises:
determining the distance between the face of the object to be checked and the three-dimensional camera module through a range sensor;
if the distance is less than a preset distance threshold, obtaining the three-dimensional point cloud data matrix of the face of the object to be checked through the three-dimensional camera module;
if the distance is not less than the preset distance threshold, sending prompt information instructing the object to be checked to move closer to the three-dimensional camera module.
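A small sketch of this pre-capture distance gate; the threshold value and the sensor and camera interfaces are hypothetical stand-ins:

```python
DISTANCE_THRESHOLD_M = 0.5  # hypothetical preset distance threshold, in metres

def acquire_point_cloud(measure_distance, capture_cloud, prompt):
    """measure_distance, capture_cloud and prompt are callables wrapping the
    range sensor, the 3D camera module and the user-facing prompt channel."""
    if measure_distance() < DISTANCE_THRESHOLD_M:
        return capture_cloud()   # close enough: acquire the point cloud matrix
    prompt("Please move closer to the 3D camera module.")
    return None                  # caller may retry after the prompt

# Stand-in wiring: a face 0.8 m away triggers the prompt instead of capture.
cloud = acquire_point_cloud(lambda: 0.8,
                            lambda: [[0.0, 0.0, 0.45, 0.7]],
                            print)
```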
9. A device for liveness detection, characterized in that the device comprises:
a point cloud obtaining module, configured to obtain a three-dimensional point cloud data matrix of the face of an object to be checked through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix includes the three-dimensional coordinate information and image parameter information of each sampled point of the face of the object to be checked;
a point cloud rectification module, configured to input the three-dimensional point cloud data matrix into a pre-trained spatial transform model to obtain a corrected three-dimensional point cloud data matrix, wherein the spatial transform model is obtained by training on the coordinate distribution features of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution features of the same sample matrices after manual correction into the preset pose;
a determining module, configured to determine whether the object to be checked is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained liveness detection model.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the steps of the method of any one of claims 1-8.
11. An electronic device, characterized by comprising:
a memory on which a computer program is stored;
a processor, configured to execute the computer program in the memory so as to realize the steps of the method of any one of claims 1-8.
CN201910239825.1A 2019-03-27 2019-03-27 Method and apparatus for in vivo testing, electronic device, and storage medium Active CN110059579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910239825.1A CN110059579B (en) 2019-03-27 2019-03-27 Method and apparatus for in vivo testing, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN110059579A 2019-07-26
CN110059579B CN110059579B (en) 2020-09-04

Family

ID=67317460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910239825.1A Active CN110059579B (en) 2019-03-27 2019-03-27 Method and apparatus for in vivo testing, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110059579B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1426760A (en) * 2001-12-18 2003-07-02 中国科学院自动化研究所 Identity discriminating method based on living body iris
CN105844206A (en) * 2015-01-15 2016-08-10 北京市商汤科技开发有限公司 Identity authentication method and identity authentication device
CN105184246A (en) * 2015-08-28 2015-12-23 北京旷视科技有限公司 Living body detection method and living body detection system
CN105320939A (en) * 2015-09-28 2016-02-10 北京天诚盛业科技有限公司 Iris biopsy method and apparatus
CN105320947A (en) * 2015-11-04 2016-02-10 博宏信息技术有限公司 Face in-vivo detection method based on illumination component
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses
CN105740780A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
CN105740775A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Three-dimensional face living body recognition method and device
CN105740778A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Improved three-dimensional human face in-vivo detection method and device thereof
CN105550668A (en) * 2016-01-25 2016-05-04 东莞市中控电子技术有限公司 Apparatus for collecting biological features of living body and method for identifying biological features of living body
CN106778518A (en) * 2016-11-24 2017-05-31 汉王科技股份有限公司 A kind of human face in-vivo detection method and device
US20180173980A1 (en) * 2016-12-15 2018-06-21 Beijing Kuangshi Technology Co., Ltd. Method and device for face liveness detection
US20180349682A1 (en) * 2017-05-31 2018-12-06 Facebook, Inc. Face liveness detection
CN108280418A (en) * 2017-12-12 2018-07-13 北京深醒科技有限公司 The deception recognition methods of face image and device
CN108319901A (en) * 2018-01-17 2018-07-24 百度在线网络技术(北京)有限公司 Biopsy method, device, computer equipment and the readable medium of face
CN108764091A (en) * 2018-05-18 2018-11-06 北京市商汤科技开发有限公司 Biopsy method and device, electronic equipment and storage medium
CN109034029A (en) * 2018-07-17 2018-12-18 新疆玖富万卡信息技术有限公司 Detect face identification method, readable storage medium storing program for executing and the electronic equipment of living body

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANDREA LAGORIO et al.: "Liveness detection based on 3D face shape analysis", 2013 International Workshop on Biometrics and Forensics (IWBF) *
K. MOHAN et al.: "A combined HOG-LPQ with Fuz-SVM classifier for object face liveness detection", 2017 International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) *
A. SENGUR et al.: "Deep Feature Extraction for Face Liveness Detection", 2018 International Conference on Artificial Intelligence and Data Processing (IDAP) *
WU Jipeng et al.: "Face liveness detection method based on FS-LBP features", Journal of Jimei University *
LI Lanying: "Research on live face detection technology in identity authentication", China Masters' Theses Full-text Database *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308722A (en) * 2019-07-28 2021-02-02 四川谦泰仁投资管理有限公司 Aquaculture insurance declaration request verification system based on infrared camera shooting
CN111160233A (en) * 2019-12-27 2020-05-15 中国科学院苏州纳米技术与纳米仿生研究所 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN111160233B (en) * 2019-12-27 2023-04-18 中国科学院苏州纳米技术与纳米仿生研究所 Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN113970922A (en) * 2020-07-22 2022-01-25 商汤集团有限公司 Point cloud data processing method and intelligent driving control method and device
WO2022017131A1 (en) * 2020-07-22 2022-01-27 商汤集团有限公司 Point cloud data processing method and device, and intelligent driving control method and device
CN112102506A (en) * 2020-09-25 2020-12-18 北京百度网讯科技有限公司 Method, device and equipment for acquiring sampling point set of object and storage medium
CN112102506B (en) * 2020-09-25 2023-07-07 北京百度网讯科技有限公司 Acquisition method, device, equipment and storage medium for sampling point set of object
CN112102496A (en) * 2020-09-27 2020-12-18 安徽省农业科学院畜牧兽医研究所 Cattle physique measuring method, model training method and system
CN112102496B (en) * 2020-09-27 2024-03-26 安徽省农业科学院畜牧兽医研究所 Cattle physique measurement method, model training method and system

Also Published As

Publication number Publication date
CN110059579B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN106780906B Person-ID consistency recognition method and system based on deep convolutional neural networks
CN110059579A Method and apparatus for in vivo testing, electronic device, and storage medium
CN107748869B (en) 3D face identity authentication method and device
CN107609383B (en) 3D face identity authentication method and device
CN107633165B (en) 3D face identity authentication method and device
CN106897675B Face liveness detection method combining binocular vision depth features and appearance features
US10824849B2 (en) Method, apparatus, and system for resource transfer
CN110383288A Face recognition method and apparatus, and electronic device
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN107316029B Living body verification method and device
CN103514440A (en) Facial recognition
WO2020088029A1 (en) Liveness detection method, storage medium, and electronic device
GB2560340A (en) Verification method and system
CN103514439A (en) Facial recognition
CN112232155B (en) Non-contact fingerprint identification method and device, terminal and storage medium
EP2148303A1 (en) Vein pattern management system, vein pattern registration device, vein pattern authentication device, vein pattern registration method, vein pattern authentication method, program, and vein data structure
CN112232163B (en) Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN110263768A Face recognition method based on deep residual network
CN112016525A (en) Non-contact fingerprint acquisition method and device
CN113205057A (en) Face living body detection method, device, equipment and storage medium
CN106709418A Face recognition method and device based on scene photo and ID photo
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN112232159A (en) Fingerprint identification method, device, terminal and storage medium
Betta et al. Face-based recognition techniques: proposals for the metrological characterization of global and feature-based approaches
WO2021046773A1 (en) Facial anti-counterfeiting detection method and apparatus, chip, electronic device and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant