CN110059579A - Method and apparatus for in-vivo testing, electronic device and storage medium - Google Patents
Method and apparatus for in-vivo testing, electronic device and storage medium
- Publication number: CN110059579A
- Application number: CN201910239825.1A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The purpose of this disclosure is to provide a method and apparatus for in-vivo testing, an electronic device, and a storage medium, so as to solve the problem in the related art that face recognition liveness testing is not accurate enough. The method includes: acquiring a three-dimensional point cloud data matrix of the face of an object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix includes three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected; inputting the three-dimensional point cloud matrix into a pre-trained spatial transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the spatial transformation model is obtained by training on the coordinate distribution characteristics of sample three-dimensional point cloud data matrices that are not in a preset pose and the coordinate distribution characteristics of the same sample matrices after manual correction into the preset pose; and determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
Description
Technical Field
The present disclosure relates to the field of data processing technology, and in particular, to a method and apparatus for in vivo testing, an electronic device, and a storage medium.
Background
With the development of science and technology, data processing efficiency has improved, and the ways of verifying a person's legal identity are changing day by day. In the related art, a scheme has been proposed for verifying a user's legal identity by collecting biometric features of the person to be verified, where the biometric features may be fingerprint features, facial features, or the like.
Like other biological characteristics of the human body (fingerprints, irises, and the like), the human face is innate, and its uniqueness and the fact that it is not easily copied provide the necessary preconditions for identity authentication. In contrast to other types of biometric recognition, face recognition is non-contact: the user does not need to be in direct contact with the device, and the device can capture images of the face. In addition, in practical application scenarios multiple faces can be sorted, judged, and recognized.
However, as the application scenarios of face recognition increase, attacks have appeared in which lawless persons use masks or 3D face models, or replay face images of legitimate users, to masquerade as those users and illegally access their accounts. Recognizing whether a presented face is a living body has therefore become an important component of face recognition.
Disclosure of Invention
An object of the present disclosure is to provide a method and apparatus for in-vivo testing, an electronic device, and a storage medium to solve the problem in the related art that face recognition in-vivo testing is not accurate enough.
To achieve the above object, in a first aspect, the present disclosure provides a method for in vivo testing, the method comprising:
acquiring a three-dimensional point cloud data matrix of a face of an object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected;
inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
Optionally, the in-vivo detection model is a convolutional neural network model; determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model, wherein the determining comprises the following steps:
extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model;
calculating a variance value of the feature matrix;
judging whether the variance value is within a preset variance range, wherein the preset variance range is determined according to the variance value obtained by calculating the feature matrix of the face of the corresponding living body sample;
and if the variance value is within a preset variance range, determining that the object to be detected is a living body.
Optionally, the calculating to obtain the variance value of the feature matrix includes:
the variance value sigma is calculated by the following formula2:
Wherein X represents any element in the feature matrix; μ represents the mean of all elements in the feature matrix; n represents the total number of elements in the feature matrix.
Optionally, the inputting the three-dimensional point cloud matrix into a pre-trained spatial transformation model to obtain a corrected three-dimensional point cloud data matrix includes:
inputting three-dimensional coordinate information in the three-dimensional point cloud data matrix into a pre-trained space transformation model to obtain a space transformation matrix; the spatial transformation model is obtained by training a sample three-dimensional point cloud data matrix coordinate distribution characteristic which is not in a preset pose and a sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a front-view angle.
Optionally, the coordinate correction of the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain a corrected three-dimensional point cloud data matrix includes:
coordinate correction is carried out on the three-dimensional point cloud data matrix through the following formula:

(x_i', y_i', z_i')^T = A_θ · (x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i, 1)^T is the input quantity, in which (x_i, y_i, z_i) represents the three-dimensional coordinates of the i-th point of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, whose expanded representation is the 3 × 4 matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; (x_i', y_i', z_i')^T is the output quantity, and (x_i', y_i', z_i') represents the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
Optionally, the image parameter information includes any of the following parameters:
color parameters, reflection intensity parameters, temperature parameters.
Optionally, before the acquiring, by the three-dimensional camera module, the three-dimensional point cloud data matrix of the face of the object to be detected, the method further includes:
determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor;
if the distance is smaller than a preset distance threshold value, a three-dimensional point cloud data matrix of the face of the object to be detected is obtained through a three-dimensional camera module;
and if the distance is not smaller than the preset distance threshold value, sending prompt information instructing the object to be detected to move closer to the three-dimensional camera module.
In a second aspect, the present disclosure provides a device for in vivo testing, the device comprising:
the system comprises a point cloud acquisition module, a three-dimensional camera module and a data processing module, wherein the point cloud acquisition module is used for acquiring a three-dimensional point cloud data matrix of a face of an object to be detected through the three-dimensional camera module, and the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected;
the point cloud correction module is used for inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and the determining module is used for determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
Optionally, the living body detection model is a convolutional neural network model, and the determining module is configured to:
extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model;
calculating a variance value of the feature matrix;
judging whether the variance value is within a preset variance range, wherein the preset variance range is determined according to the variance value obtained by calculating the feature matrix of the face of the corresponding living body sample;
and if the variance value is within a preset variance range, determining that the object to be detected is a living body.
Optionally, the determining module is configured to calculate the variance value σ² according to the following formula:

σ² = (1/N) · Σ (X - μ)²

wherein X represents any element in the feature matrix; μ represents the mean of all elements in the feature matrix; and N represents the total number of elements in the feature matrix.
Optionally, the point cloud rectification module is configured to:
inputting three-dimensional coordinate information in the three-dimensional point cloud data matrix into a pre-trained space transformation model to obtain a space transformation matrix; the spatial transformation model is obtained by training a sample three-dimensional point cloud data matrix coordinate distribution characteristic which is not in a preset pose and a sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a front-view angle.
Optionally, the point cloud correction module is configured to perform coordinate correction on the three-dimensional point cloud data matrix through the following formula:

(x_i', y_i', z_i')^T = A_θ · (x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i, 1)^T is the input quantity, in which (x_i, y_i, z_i) represents the three-dimensional coordinates of the i-th point of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, whose expanded representation is the 3 × 4 matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; (x_i', y_i', z_i')^T is the output quantity, and (x_i', y_i', z_i') represents the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
Optionally, the image parameter information includes any of the following parameters: color parameters, reflection intensity parameters, temperature parameters.
Optionally, the apparatus further includes a prompt module, configured to perform the following operations before the three-dimensional point cloud data matrix of the face of the object to be examined is acquired by the three-dimensional camera module:
determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor;
if the distance is smaller than a preset distance threshold value, a three-dimensional point cloud data matrix of the face of the object to be detected is obtained through a three-dimensional camera module;
and if the distance is not smaller than the preset distance threshold value, sending prompt information instructing the object to be detected to move closer to the three-dimensional camera module.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods for in vivo testing.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of any of the methods for in vivo testing.
The technical scheme can at least achieve the following technical effects:
the method comprises the steps of obtaining a three-dimensional point cloud data matrix of the face of an object to be detected through a three-dimensional camera module, inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, and further determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and the pre-trained living body detection model, wherein the three-dimensional point cloud data matrix is directly obtained through the three-dimensional camera module, so that the living body is detected based on the three-dimensional point cloud data matrix subsequently, the intermediate processing link of data can be reduced to a greater extent, data loss is reduced, and the accuracy of input information of the subsequent living body detection model is ensured. The three-dimensional camera module can acquire a three-dimensional point cloud data matrix by shooting once, has higher data acquisition efficiency, reduces the information input time of a user, is quicker, can promote the uneasy perceptibility of human face living body detection, and can be widely applied to preventing illegal face attack. In addition, compared with a scheme of simply utilizing a depth map to carry out living body detection, the point cloud has more parameter information, and the attack of the 3D face model can be more effectively prevented.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method for in vivo testing according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating an implementation scenario of a method for in vivo testing according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a convolutional neural network according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a convolution operation according to an exemplary embodiment.
FIG. 5 is a flow chart illustrating another method for in vivo testing according to an exemplary embodiment.
FIG. 6 is a flow chart illustrating another method for in vivo testing according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating an apparatus for in vivo testing in accordance with an exemplary embodiment.
FIG. 8 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
FIG. 1 is a flow chart illustrating a method for in vivo testing, the method comprising:
s11, acquiring a three-dimensional point cloud data matrix of the face of the object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected.
FIG. 2 is a schematic diagram of acquiring a three-dimensional point cloud data matrix of the face of an object to be inspected by a three-dimensional camera module. As shown in fig. 2, a three-dimensional camera module is mounted on the terminal. The three-dimensional camera module comprises a Dot Projector, a Front Camera, a Receiver, a Ranging Sensor, an infrared sensor, and a Flood Illuminator. These components cooperate to project the dot matrix and collect the point cloud data.
Specifically, the Flood Illuminator uses a low-power vertical-cavity surface-emitting laser (VCSEL) to emit "unstructured" infrared light, which is projected onto the surface of the object to be inspected. The Ranging Sensor (distance sensor) also uses a low-power VCSEL to emit infrared laser light; when an object approaches, the laser is reflected back, which tells the terminal that the object to be inspected is nearby. The Dot Projector uses a high-power VCSEL to emit infrared laser light, generates about 30,000 "structured" light spots through structures such as wafer-level optics (WLO) and diffractive optical elements (DOE), and projects them onto the surface of the object to be inspected; the array of spots reflected back to the infrared camera is used to calculate the distances (depths) at different positions of the face.
In a specific implementation, the image parameter information includes any one or more of the following parameters: color parameters, reflection intensity parameters, temperature parameters. That is, each point in the collected point cloud of the face includes three-dimensional coordinates, and may also include a color parameter value, a reflection intensity value, and a temperature parameter value collected at that point. For example, if N points are obtained and each point includes the three-dimensional coordinate data (x, y, z) (3 dimensions) as well as M kinds of parameters, then in practice the resulting three-dimensional point cloud data matrix is a matrix of N × (3 + M).
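As an illustrative sketch only (not part of the patent text), the following Python/NumPy snippet shows how such an N × (3 + M) matrix could be assembled; the array names, the value of N, and the choice of M = 3 extra channels (color, reflection intensity, temperature) are assumptions made for the example.

```python
import numpy as np

# Hypothetical per-point measurements from the 3D camera module (N sampled points).
N = 30000
xyz = np.random.rand(N, 3)          # three-dimensional coordinates (x, y, z) of each point
color = np.random.rand(N, 1)        # color parameter collected at each point
intensity = np.random.rand(N, 1)    # reflection intensity collected at each point
temperature = np.random.rand(N, 1)  # temperature parameter collected at each point

# Concatenate into the N x (3 + M) three-dimensional point cloud data matrix (here M = 3).
point_cloud_matrix = np.hstack([xyz, color, intensity, temperature])
print(point_cloud_matrix.shape)     # (30000, 6)
```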
In an optional embodiment, before the three-dimensional point cloud data matrix of the face of the object to be examined is acquired by the three-dimensional camera module, the method further includes: determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor; if the distance is smaller than a preset distance threshold value, acquiring the three-dimensional point cloud data matrix of the face of the object to be detected through the three-dimensional camera module; and if the distance is not smaller than the preset distance threshold value, sending prompt information instructing the object to be detected to move closer to the three-dimensional camera module.
The object to be detected can then move closer to the three-dimensional camera module according to the prompt, which allows the three-dimensional camera module to acquire a more accurate three-dimensional point cloud data matrix.
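A minimal sketch of this distance gate follows; the helper functions and the threshold value are assumptions, since the patent specifies neither.

```python
DISTANCE_THRESHOLD_M = 0.5  # assumed preset distance threshold, in meters

def acquire_point_cloud_if_close(read_distance_sensor, capture_point_cloud, show_prompt):
    """Gate point cloud acquisition on the measured face-to-camera distance."""
    distance = read_distance_sensor()               # distance reported by the ranging sensor
    if distance < DISTANCE_THRESHOLD_M:
        return capture_point_cloud()                # N x (3 + M) point cloud data matrix
    show_prompt("Please move closer to the camera") # prompt the user to move closer
    return None
```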
And S12, inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose.
In practice, the object to be detected often judges its position relative to the sensors in the three-dimensional camera module by visually estimating its position relative to the module, and then aligns itself by moving its body or adjusting the pose of the terminal. However, this vision-based adjustment carries some uncertainty: if the collected face of the object to be detected is not in a front-view pose, subsequent data processing will contain large errors.
It should be noted that if the three-dimensional point cloud data matrix is to be input into a convolutional neural network to classify the object to be detected as living or not (i.e. to identify whether it is a living body), classification accuracy must be ensured across all kinds of input samples. The input three-dimensional point cloud data matrix therefore needs to satisfy properties such as local invariance, translation invariance, scale invariance, and rotation invariance. That is, no matter in what pose the three-dimensional point cloud data matrix is acquired, it needs to be transformed to a pose that facilitates data processing by the convolutional neural network, for example the pose in which the face is at a front-view angle, so as to ensure the accuracy of the convolutional neural network processing.
In an optional embodiment, the inputting the three-dimensional point cloud matrix into a pre-trained spatial transformation model to obtain a rectified three-dimensional point cloud data matrix includes: inputting three-dimensional coordinate information in the three-dimensional point cloud data matrix into a pre-trained space transformation model to obtain a space transformation matrix; the spatial transformation model is obtained by training a sample three-dimensional point cloud data matrix coordinate distribution characteristic which is not in a preset pose and a sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose; and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
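The patent does not specify the internal architecture of the spatial transformation model. As one possible sketch, a small PointNet-style regressor that maps the N × 3 coordinates to a 3 × 4 spatial transformation matrix could look like the following PyTorch code; every layer size here is an assumption. It would be trained, as described above, on pairs of sample point clouds that are not in the preset pose and the same samples manually corrected into the preset pose.

```python
import torch
import torch.nn as nn

class SpatialTransformModel(nn.Module):
    """Regresses a 3x4 spatial transformation matrix A_theta from N x 3 point coordinates."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(        # shared per-point feature extractor
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(             # global feature -> 12 matrix entries
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 12),
        )

    def forward(self, coords):                 # coords: (batch, N, 3)
        feat = self.point_mlp(coords.transpose(1, 2))  # (batch, 256, N)
        global_feat = feat.max(dim=2).values           # (batch, 256) global max pooling
        theta = self.head(global_feat)                  # (batch, 12)
        return theta.view(-1, 3, 4)                     # spatial transformation matrix A_theta
```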
A three-dimensional coordinate system (x, y, z) is established for the space in which the three-dimensional camera module is located; the position of the object to be detected can then be represented by coordinates (x, y, z), and its orientation by rotation angles (α, β, γ) about the coordinate axes. Pose correction of the three-dimensional point cloud data matrix adjusts the position and orientation of the whole object to be detected according to the three-dimensional coordinate position of each point.
The coordinate correction of the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain a corrected three-dimensional point cloud data matrix includes performing coordinate correction on the three-dimensional point cloud data matrix through the following formula:

(x_i', y_i', z_i')^T = A_θ · (x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i, 1)^T is the input quantity, in which (x_i, y_i, z_i) represents the three-dimensional coordinates of the i-th point of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, whose expanded representation is the 3 × 4 matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; (x_i', y_i', z_i')^T is the output quantity, and (x_i', y_i', z_i') represents the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
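Assuming the 3 × 4 affine form of A_θ reconstructed above, a sketch of applying the correction to every point of the N × (3 + M) matrix could look as follows; only the first three columns (the coordinates) are transformed, and the image parameter columns are left unchanged.

```python
import numpy as np

def correct_point_cloud(point_cloud_matrix, a_theta):
    """Apply the 3x4 spatial transformation matrix A_theta to the point coordinates."""
    coords = point_cloud_matrix[:, :3]                                # (N, 3) coordinates before correction
    params = point_cloud_matrix[:, 3:]                                # (N, M) image parameter columns
    homogeneous = np.hstack([coords, np.ones((coords.shape[0], 1))])  # (N, 4) homogeneous coordinates
    corrected_coords = homogeneous @ a_theta.T                        # (N, 3) corrected coordinates
    return np.hstack([corrected_coords, params])                      # corrected N x (3 + M) matrix
```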
And S13, determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
Optionally, a feature matrix may be extracted from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model; the feature matrix is input into a preset classification function model to obtain a probability value that the three-dimensional point cloud data matrix of the face belongs to the living body class; and if the probability value is greater than a preset probability threshold value, the object to be detected is determined to be a living body.
A convolutional neural network is a multi-layer neural network comprising convolutional layers, pooling layers, and the like. Its artificial neurons respond to surrounding units, progressively reduce the dimensionality of image recognition problems with large data volumes, and can ultimately be trained to perform classification, localization, detection, and similar functions. The convolutional neural network can be trained; its training method uses the chain rule to compute derivatives for the hidden-layer nodes, i.e. back-propagation with gradient descent and chain-rule differentiation.
Fig. 3 is a schematic diagram of a convolutional neural network provided by an embodiment of the present disclosure: data flows from an input layer, through hidden layers, to an output layer. The hidden layers comprise a number of different layers (convolutional layers, pooling layers, activation function layers, fully connected layers, etc.).
Feature extraction is completed by convolution and down-sampling in the convolutional layers, generating a feature matrix. Fig. 4 is a schematic diagram of the convolution operation provided by the present disclosure.
First, the three-dimensional point cloud data matrix may be the 7 × 7 input matrix in the figure, each point of which is a source pixel. The input matrix is traversed by a 3 × 3 filter window (filter), and a convolution operation is performed at each position to obtain an output value. The filter window is also called the convolution matrix (convolution kernel).
The specific convolution operation process can be seen in the vertical form at the upper right corner of the figure, and the operation result is-8.
The center value (pixel having a value of 1) in the window matrix framed in the input matrix of fig. 4 is replaced by the result of the convolution operation, that is, by the pixel having a value of-8.
It should be noted that the above calculation process may be repeated several times according to actual needs until the required feature matrix is generated.
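A minimal NumPy sketch of a single window operation of this kind is shown below; the concrete window and kernel values are assumptions chosen so that the result is -8, matching the example in Fig. 4, and are not taken from the figure itself.

```python
import numpy as np

def convolve_window(window, kernel):
    """Element-wise multiply a window of the input matrix by the kernel and sum the products."""
    return int(np.sum(window * kernel))

# Assumed 3x3 window (center source pixel has value 1, as in the text) and an assumed kernel.
window = np.array([[1, 1, 1],
                   [1, 1, 1],
                   [1, 1, 0]])
kernel = -np.ones((3, 3), dtype=int)

print(convolve_window(window, kernel))  # -8; this value replaces the window's center pixel
```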
The preset classification function model is a Softmax classification function. Specifically, the feature matrix is input into the Softmax classification function model, and the outputs of the multiple neurons are mapped into the interval 0-1. That is, after the feature matrix has been extracted through the convolutional neural network, it is input to the Softmax layer, which finally outputs a probability value corresponding to each category.
Specifically, a fully connected layer transformation is applied to the feature matrix to obtain an output multi-dimensional feature vector, where the dimensionality of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function.
After the convolution operations of the convolutional neural network, the output of each neuron can be expressed by the following formula:

z_i = Σ_j (w_ij · x_ij) + b

wherein x_ij is the j-th input value to the i-th neuron; w_ij is the j-th weight of the i-th neuron; b is an offset (bias) value; and z_i represents the i-th output of the network, i.e. the i-th value in the multi-dimensional feature vector.
Further, the probability value that the face image belongs to the living body class is determined according to the following formula:

a_i = exp(z_i) / Σ_k exp(z_k)

wherein a_i represents the probability value of the i-th category of Softmax, and z_i is the i-th value in the multi-dimensional feature vector.

For example, suppose the Softmax classification function has two categories: the first category is "non-living", with corresponding probability value a_1; the second category is "living", with corresponding probability value a_2. The probability value a_2 that the face image belongs to the living body class can then be obtained through the above probability formula.
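A small NumPy sketch of this two-class Softmax step follows; the feature-vector values are assumptions used only for illustration.

```python
import numpy as np

def softmax(z):
    """Map the fully connected layer outputs z_i to probabilities a_i in [0, 1]."""
    exp_z = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return exp_z / np.sum(exp_z)

# Assumed 2-dimensional feature vector from the fully connected layer:
# index 0 = "non-living" class, index 1 = "living" class.
z = np.array([0.3, 2.1])
a = softmax(z)
a1, a2 = a[0], a[1]
print(a2)   # probability value that the face belongs to the living body class
```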
The technical scheme can at least achieve the following technical effects:
A three-dimensional point cloud data matrix of the face of the object to be detected is acquired through the three-dimensional camera module, the three-dimensional point cloud matrix is input into a pre-trained spatial transformation model to obtain a corrected three-dimensional point cloud data matrix, and whether the object to be detected is a living body is then determined according to the corrected three-dimensional point cloud data matrix and the pre-trained living body detection model. Because the three-dimensional point cloud data matrix is obtained directly by the three-dimensional camera module and the subsequent living body detection is based on this matrix, intermediate data-processing steps are largely eliminated, data loss is reduced, and the accuracy of the input to the subsequent living body detection model is ensured. The three-dimensional camera module can acquire the three-dimensional point cloud data matrix in a single shot, giving higher data acquisition efficiency and shortening the time needed for the user to enter information; this makes face liveness detection faster and less perceptible to the user, so it can be widely applied to defending against illegal face attacks. In addition, compared with a scheme that uses only a depth map for living body detection, the point cloud carries more parameter information and can more effectively prevent attacks with 3D face models.
FIG. 5 is a flow chart illustrating another method for in vivo testing according to an exemplary embodiment, the method comprising:
s51, acquiring a three-dimensional point cloud data matrix of the face of the object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected.
In a specific implementation, the image parameter information includes any one or more of the following parameters: color parameters, reflection intensity parameters, temperature parameters. That is, each point in the collected point cloud of the face includes three-dimensional coordinates, and may also include a color parameter value, a reflection intensity value, and a temperature parameter value collected at that point. For example, if N points are obtained and each point includes the three-dimensional coordinate data (x, y, z) (3 dimensions) as well as M kinds of parameters, then in practice the resulting three-dimensional point cloud data matrix is a matrix of N × (3 + M).
In an optional embodiment, before the three-dimensional point cloud data matrix of the face of the object to be examined is acquired by the three-dimensional camera module, the method further includes: determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor; if the distance is smaller than a preset distance threshold value, acquiring the three-dimensional point cloud data matrix of the face of the object to be detected through the three-dimensional camera module; and if the distance is not smaller than the preset distance threshold value, sending prompt information instructing the object to be detected to move closer to the three-dimensional camera module.
The object to be detected can then move closer to the three-dimensional camera module according to the prompt, which allows the three-dimensional camera module to acquire a more accurate three-dimensional point cloud data matrix.
And S52, inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose.
In practice, the object to be detected often judges its position relative to the sensors in the three-dimensional camera module by visually estimating its position relative to the module, and then aligns itself by moving its body or adjusting the pose of the terminal. However, this vision-based adjustment carries some uncertainty: if the collected face of the object to be detected is not in a front-view pose, subsequent data processing will contain large errors.
It should be noted that if the three-dimensional point cloud data matrix is to be input into a convolutional neural network to classify the object to be detected as living or not (i.e. to identify whether it is a living body), classification accuracy must be ensured across all kinds of input samples. The input three-dimensional point cloud data matrix therefore needs to satisfy properties such as local invariance, translation invariance, scale invariance, and rotation invariance. That is, no matter in what pose the three-dimensional point cloud data matrix is acquired, it needs to be transformed to a pose that facilitates data processing by the convolutional neural network, for example the pose in which the face is at a front-view angle, so as to ensure the accuracy of the convolutional neural network processing.
In an optional embodiment, the inputting the three-dimensional point cloud matrix into a pre-trained spatial transformation model to obtain a rectified three-dimensional point cloud data matrix includes: inputting three-dimensional coordinate information in the three-dimensional point cloud data matrix into a pre-trained space transformation model to obtain a space transformation matrix; the spatial transformation model is obtained by training a sample three-dimensional point cloud data matrix coordinate distribution characteristic which is not in a preset pose and a sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose; and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
A three-dimensional coordinate system (x, y, z) is established for the space in which the three-dimensional camera module is located; the position of the object to be detected can then be represented by coordinates (x, y, z), and its orientation by rotation angles (α, β, γ) about the coordinate axes. Pose correction of the three-dimensional point cloud data matrix adjusts the position and orientation of the whole object to be detected according to the three-dimensional coordinate position of each point.
The coordinate correction of the three-dimensional point cloud data matrix through the spatial transformation matrix to obtain a corrected three-dimensional point cloud data matrix includes performing coordinate correction on the three-dimensional point cloud data matrix through the following formula:

(x_i', y_i', z_i')^T = A_θ · (x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i, 1)^T is the input quantity, in which (x_i, y_i, z_i) represents the three-dimensional coordinates of the i-th point of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, whose expanded representation is the 3 × 4 matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; (x_i', y_i', z_i')^T is the output quantity, and (x_i', y_i', z_i') represents the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
And S53, extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model.
And S54, calculating the variance value of the feature matrix.
Optionally, the calculating to obtain the variance value of the feature matrix includes calculating the variance value σ² by the following formula:

σ² = (1/N) · Σ (X - μ)²

wherein X represents any element in the feature matrix; μ represents the mean of all elements in the feature matrix; and N represents the total number of elements in the feature matrix.
And S55, judging whether the variance value is within a preset variance range, wherein the preset variance range is determined according to the variance value calculated according to the feature matrix of the face of the living body sample.
And S56, if the variance value is within a preset variance range, determining that the object to be detected is a living body.
It is worth noting that using the range of the variance value of the feature matrix to judge whether the object to be detected is a living body reduces the amount of data processing, so the result of the living body detection can be obtained more quickly.
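A minimal sketch of this variance-range check is given below; the preset range values are assumptions, whereas in the embodiment they are determined from feature matrices of living body sample faces.

```python
import numpy as np

# Assumed preset variance range, determined in advance from living-body sample feature matrices.
VARIANCE_MIN, VARIANCE_MAX = 0.8, 1.6

def is_living_by_variance(feature_matrix):
    """Compute sigma^2 = sum((X - mu)^2) / N over the feature matrix and test it against the range."""
    mu = feature_matrix.mean()                                       # mean of all elements
    sigma_sq = np.sum((feature_matrix - mu) ** 2) / feature_matrix.size
    return VARIANCE_MIN <= sigma_sq <= VARIANCE_MAX                  # within range -> judged living
```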
FIG. 6 is a flow chart illustrating another method for in vivo testing according to an exemplary embodiment, the method comprising:
s61, acquiring a three-dimensional point cloud data matrix of the face of the object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected.
And S62, inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose.
And S63, extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model.
And S64, calculating the variance value of the feature matrix.
And S65, inputting the feature matrix into a preset classification function model to obtain a probability value that the three-dimensional point cloud data matrix of the face belongs to the living body class.
And S66, if the variance value is within a preset variance range and the probability value is greater than a preset probability threshold value, determining that the object to be detected is a living body, wherein the preset variance range is determined according to the variance value calculated according to the feature matrix of the face of the corresponding living body sample.
In the technical scheme of this embodiment, whether the object to be detected is a living body is judged by the probability value obtained according to the classification function and the variance value of the feature matrix, so that secondary verification can be realized for data results of different types, and the accuracy and the safety of living body detection are improved.
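A sketch of this combined decision, reusing the variance check sketched above, is as follows; the probability threshold is an assumption.

```python
PROBABILITY_THRESHOLD = 0.9  # assumed preset probability threshold

def is_living(feature_matrix, live_probability):
    """Secondary verification: require both the variance-range check and the Softmax probability check."""
    return is_living_by_variance(feature_matrix) and live_probability > PROBABILITY_THRESHOLD
```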
It should be noted that, for simplicity of description, the above-mentioned method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the described action sequence. For example, step S64 and step S65 may be executed in any order, sequentially, or in parallel. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention. For example, step S62 may not be performed, and in step S63, a feature matrix is extracted from the initially acquired three-dimensional point cloud data matrix through a pre-trained convolutional neural network model. In this example, secondary verification can be achieved by different types of data results in subsequent steps, and the effect of improving the accuracy and safety of in-vivo detection can also be achieved.
FIG. 7 is a block diagram illustrating an apparatus for in vivo testing in accordance with an exemplary embodiment. The device comprises:
the point cloud obtaining module 710 is configured to obtain a three-dimensional point cloud data matrix of a face of an object to be detected through a three-dimensional camera module, where the three-dimensional point cloud data matrix includes three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected;
the point cloud correction module 720 is configured to input the three-dimensional point cloud matrix into a pre-trained spatial transformation model to obtain a corrected three-dimensional point cloud data matrix, where the spatial transformation model is obtained by training a coordinate distribution feature of a sample three-dimensional point cloud data matrix that is not in a preset pose and a coordinate distribution feature of the sample three-dimensional point cloud data matrix that is artificially corrected to be in the preset pose;
and the determining module 730 is configured to determine whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
The technical scheme can at least achieve the following technical effects:
the method comprises the steps of obtaining a three-dimensional point cloud data matrix of the face of an object to be detected through a three-dimensional camera module, inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, and further determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and the pre-trained living body detection model, wherein the three-dimensional point cloud data matrix is directly obtained through the three-dimensional camera module, so that the living body is detected based on the three-dimensional point cloud data matrix subsequently, the intermediate processing link of data can be reduced to a greater extent, data loss is reduced, and the accuracy of input information of the subsequent living body detection model is ensured. The three-dimensional camera module can acquire a three-dimensional point cloud data matrix by shooting once, has higher data acquisition efficiency, reduces the information input time of a user, is quicker, can promote the uneasy perceptibility of human face living body detection, and can be widely applied to preventing illegal face attack. In addition, compared with a scheme of simply utilizing a depth map to carry out living body detection, the point cloud has more parameter information, and the attack of the 3D face model can be more effectively prevented.
Optionally, the living body detection model is a convolutional neural network model, and the determining module is configured to:
extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model;
calculating a variance value of the feature matrix;
judging whether the variance value is within a preset variance range, wherein the preset variance range is determined according to the variance value obtained by calculating the feature matrix of the face of the corresponding living body sample;
and if the variance value is within a preset variance range, determining that the object to be detected is a living body.
Optionally, the determining module is configured to calculate the variance value σ² according to the following formula:

σ² = (1/N) · Σ (X - μ)²

wherein X represents any element in the feature matrix; μ represents the mean of all elements in the feature matrix; and N represents the total number of elements in the feature matrix.
Optionally, the point cloud rectification module is configured to:
inputting three-dimensional coordinate information in the three-dimensional point cloud data matrix into a pre-trained space transformation model to obtain a space transformation matrix; the spatial transformation model is obtained by training a sample three-dimensional point cloud data matrix coordinate distribution characteristic which is not in a preset pose and a sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
Optionally, the preset pose is a pose in which the face is at a front-view angle.
Optionally, the point cloud correction module is configured to perform coordinate correction on the three-dimensional point cloud data matrix through the following formula:

(x_i', y_i', z_i')^T = A_θ · (x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i, 1)^T is the input quantity, in which (x_i, y_i, z_i) represents the three-dimensional coordinates of the i-th point of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, whose expanded representation is the 3 × 4 matrix [θ_11 θ_12 θ_13 θ_14; θ_21 θ_22 θ_23 θ_24; θ_31 θ_32 θ_33 θ_34]; (x_i', y_i', z_i')^T is the output quantity, and (x_i', y_i', z_i') represents the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
Optionally, the image parameter information includes any of the following parameters: color parameters, reflection intensity parameters, temperature parameters.
Optionally, the apparatus further includes a prompt module, configured to perform the following operations before the three-dimensional point cloud data matrix of the face of the object to be examined is acquired by the three-dimensional camera module:
determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor;
if the distance is smaller than a preset distance threshold value, a three-dimensional point cloud data matrix of the face of the object to be detected is obtained through a three-dimensional camera module;
and if the distance is not smaller than the preset distance threshold value, sending prompt information instructing the object to be detected to move closer to the three-dimensional camera module.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure provides a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, realizes the steps of any one of the methods for in-vivo testing.
The present disclosure provides an electronic device, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of any of the methods for in vivo testing.
FIG. 8 is a block diagram of an electronic device shown according to an example embodiment. The electronic device may be provided as a smartphone, a smart tablet, a wearable device, a personal finance terminal, or the like. As shown in fig. 8, the electronic device 800 may include: a processor 801 and a memory 802. The electronic device 800 may also include one or more of a multimedia component 803, an input/output (I/O) interface 804, and a communication component 805.
The processor 801 is configured to control the overall operation of the electronic device 800 so as to complete all or part of the steps of the above method for living body detection. The memory 802 is used to store various types of data to support operation on the electronic device 800, such as instructions for any application or method operating on the electronic device 800 and application-related data, for example living body detection model data and point cloud parameters, as well as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 802 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia components 803 may include a screen and audio components, where the screen may be, for example, a touch screen, and the audio components are used for outputting and/or inputting audio signals. For example, the audio components may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 802 or transmitted through the communication component 805. The audio components also include at least one speaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual buttons or physical buttons. The communication component 805 is used for wired or wireless communication between the electronic device 800 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 805 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic Device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described method for liveness detection.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the above-described method for liveness detection is also provided. For example, the computer readable storage medium may be the memory 802 described above including program instructions executable by the processor 801 of the electronic device 800 to perform the method for liveness detection described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the foregoing embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described separately in the present disclosure.
In addition, any combination of the various embodiments of the present disclosure may be made, and such combinations should likewise be regarded as part of the disclosure of the present disclosure, as long as they do not depart from the spirit of the present disclosure.
Claims (11)
1. A method for in vivo testing, the method comprising:
acquiring a three-dimensional point cloud data matrix of a face of an object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected;
inputting the three-dimensional point cloud matrix into a pre-trained space transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the space transformation model is obtained by training the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
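By way of illustration, the following is a minimal Python sketch of the three steps recited in claim 1. The names `capture_point_cloud`, `stn_model`, `rectify`, and `liveness_model` are hypothetical placeholders and are not part of the disclosure; both models are assumed to be already trained.

```python
import numpy as np


def detect_liveness(camera, stn_model, liveness_model) -> bool:
    """Minimal sketch of the method of claim 1 (all names are illustrative)."""
    # Step 1: acquire the 3D point cloud data matrix of the face; each row holds
    # the (x, y, z) coordinates and image parameters of one sampling point.
    point_cloud: np.ndarray = camera.capture_point_cloud()

    # Step 2: correct the point cloud to the preset pose with the
    # pre-trained spatial transformation model.
    corrected = stn_model.rectify(point_cloud)

    # Step 3: decide liveness from the corrected matrix with the
    # pre-trained liveness detection model.
    return bool(liveness_model.predict(corrected))
```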
2. The method of claim 1, wherein the liveness detection model is a convolutional neural network model; determining whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model, wherein the determining comprises the following steps:
extracting a characteristic matrix from the corrected three-dimensional point cloud data matrix through a pre-trained convolutional neural network model;
calculating a variance value of the feature matrix;
judging whether the variance value is within a preset variance range, wherein the preset variance range is determined according to the variance value obtained by calculating the feature matrix of the face of the corresponding living body sample;
and if the variance value is within a preset variance range, determining that the object to be detected is a living body.
3. The method of claim 2, wherein the calculating a variance value of the feature matrix comprises:
the variance value sigma is calculated by the following formula2:
Wherein X represents any element in the feature matrix; μ represents the mean of all elements in the feature matrix; n represents the total number of elements in the feature matrix.
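As a worked example of claims 2 and 3, the sketch below computes σ² over a CNN feature matrix with NumPy and checks it against a preset variance range; the range endpoints shown are placeholder values, not figures from the disclosure.

```python
import numpy as np


def variance_within_range(feature_matrix: np.ndarray,
                          var_min: float = 0.5,        # placeholder lower bound
                          var_max: float = 2.0) -> bool:  # placeholder upper bound
    """Apply the variance test of claims 2 and 3 to a feature matrix."""
    mu = feature_matrix.mean()                          # mean of all elements (μ)
    n = feature_matrix.size                             # total number of elements (N)
    sigma_sq = np.sum((feature_matrix - mu) ** 2) / n   # σ² = (1/N) · Σ (X − μ)²
    return var_min <= sigma_sq <= var_max               # live if within the preset range
```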
4. The method of claim 1, wherein the inputting of the three-dimensional point cloud data matrix into the pre-trained spatial transformation model to obtain the corrected three-dimensional point cloud data matrix comprises:
inputting the three-dimensional coordinate information in the three-dimensional point cloud data matrix into the pre-trained spatial transformation model to obtain a spatial transformation matrix, wherein the spatial transformation model is obtained by training on the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and carrying out coordinate correction on the three-dimensional point cloud data matrix through the space transformation matrix to obtain a corrected three-dimensional point cloud data matrix.
5. The method according to claim 4, wherein the preset pose is a pose characterizing the face at an elevation angle.
6. The method of claim 4, wherein the coordinate correcting the three-dimensional point cloud data matrix by the spatial transformation matrix to obtain a corrected three-dimensional point cloud data matrix comprises:
coordinate correction is carried out on the three-dimensional point cloud data matrix through the following formula:
P′ = A_θ · P
wherein P is the input quantity, representing the three-dimensional coordinates of the three-dimensional point cloud data matrix before correction; A_θ represents the spatial transformation matrix, which may also be written in its expanded representation as a matrix of transformation parameters; P′ is the output quantity, representing the three-dimensional coordinates of the corrected three-dimensional point cloud data matrix.
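The sketch below applies such a correction, assuming (as in a standard spatial transformer) that A_θ is a 3×4 affine matrix acting on homogeneous coordinates (x, y, z, 1); the exact shape of A_θ in the disclosure may differ, and the example matrix is illustrative only.

```python
import numpy as np


def correct_point_cloud(xyz: np.ndarray, a_theta: np.ndarray) -> np.ndarray:
    """xyz: (N, 3) coordinates before correction; a_theta: (3, 4) spatial
    transformation matrix; returns the (N, 3) corrected coordinates."""
    ones = np.ones((xyz.shape[0], 1))
    homogeneous = np.hstack([xyz, ones])   # append 1 to each point: (x, y, z, 1)
    return homogeneous @ a_theta.T         # each output row is A_θ · (x, y, z, 1)ᵀ


# Example: the identity affine matrix (no rotation, no translation)
# leaves the point cloud unchanged.
identity_a_theta = np.hstack([np.eye(3), np.zeros((3, 1))])
```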
7. The method according to any one of claims 1 to 6, wherein the image parameter information includes any of the following parameters:
color parameters, reflection intensity parameters, temperature parameters.
8. The method according to any one of claims 1 to 5, wherein, before the acquiring of the three-dimensional point cloud data matrix of the face of the object to be detected through the three-dimensional camera module, the method further comprises:
determining the distance between the face of the object to be detected and the three-dimensional camera module through a distance sensor;
if the distance is smaller than a preset distance threshold, acquiring the three-dimensional point cloud data matrix of the face of the object to be detected through the three-dimensional camera module;
and if the distance is not smaller than the preset distance threshold, sending prompt information for instructing the object to be detected to move closer to the three-dimensional camera module.
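A minimal sketch of the distance check of claim 8 follows; `read_distance_m`, `capture_point_cloud`, and the 0.5-meter threshold are assumptions for illustration only and are not specified in the disclosure.

```python
def acquire_with_distance_check(distance_sensor, camera, threshold_m: float = 0.5):
    """Gate point cloud acquisition on the face-to-camera distance (claim 8)."""
    distance = distance_sensor.read_distance_m()
    if distance < threshold_m:
        # Close enough: acquire the 3D point cloud data matrix of the face.
        return camera.capture_point_cloud()
    # Too far: prompt the object to be detected to move closer to the camera.
    print("Please move closer to the 3D camera module.")
    return None
```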
9. A device for in vivo testing, the device comprising:
a point cloud acquisition module, configured to acquire a three-dimensional point cloud data matrix of a face of an object to be detected through a three-dimensional camera module, wherein the three-dimensional point cloud data matrix comprises three-dimensional coordinate information and image parameter information of each sampling point of the face of the object to be detected;
a point cloud correction module, configured to input the three-dimensional point cloud data matrix into a pre-trained spatial transformation model to obtain a corrected three-dimensional point cloud data matrix, wherein the spatial transformation model is obtained by training on the coordinate distribution characteristics of a sample three-dimensional point cloud data matrix which is not in a preset pose and the coordinate distribution characteristics of the sample three-dimensional point cloud data matrix which is manually corrected to be in the preset pose;
and a determining module, configured to determine whether the object to be detected is a living body according to the corrected three-dimensional point cloud data matrix and a pre-trained living body detection model.
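The apparatus of claim 9 can be pictured as three cooperating modules; the class below is a hypothetical sketch of that structure, reusing the placeholder model objects from the claim 1 sketch above.

```python
class LivenessDetectionApparatus:
    """Sketch of the apparatus of claim 9 (all names are illustrative)."""

    def __init__(self, camera, stn_model, liveness_model):
        self.camera = camera                  # point cloud acquisition module
        self.stn_model = stn_model            # point cloud correction module
        self.liveness_model = liveness_model  # determining module

    def detect(self) -> bool:
        point_cloud = self.camera.capture_point_cloud()
        corrected = self.stn_model.rectify(point_cloud)
        return bool(self.liveness_model.predict(corrected))
```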
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
11. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910239825.1A CN110059579B (en) | 2019-03-27 | 2019-03-27 | Method and apparatus for in vivo testing, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910239825.1A CN110059579B (en) | 2019-03-27 | 2019-03-27 | Method and apparatus for in vivo testing, electronic device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110059579A (en) | 2019-07-26
CN110059579B (en) | 2020-09-04
Family
ID=67317460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910239825.1A Active CN110059579B (en) | 2019-03-27 | 2019-03-27 | Method and apparatus for in vivo testing, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110059579B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160233A (en) * | 2019-12-27 | 2020-05-15 | 中国科学院苏州纳米技术与纳米仿生研究所 | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance |
CN112102496A (en) * | 2020-09-27 | 2020-12-18 | 安徽省农业科学院畜牧兽医研究所 | Cattle physique measuring method, model training method and system |
CN112102506A (en) * | 2020-09-25 | 2020-12-18 | 北京百度网讯科技有限公司 | Method, device and equipment for acquiring sampling point set of object and storage medium |
CN112308722A (en) * | 2019-07-28 | 2021-02-02 | 四川谦泰仁投资管理有限公司 | Aquaculture insurance declaration request verification system based on infrared camera shooting |
CN113970922A (en) * | 2020-07-22 | 2022-01-25 | 商汤集团有限公司 | Point cloud data processing method and intelligent driving control method and device |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1426760A (en) * | 2001-12-18 | 2003-07-02 | 中国科学院自动化研究所 | Identity discriminating method based on living body iris |
CN105184246A (en) * | 2015-08-28 | 2015-12-23 | 北京旷视科技有限公司 | Living body detection method and living body detection system |
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
CN105320939A (en) * | 2015-09-28 | 2016-02-10 | 北京天诚盛业科技有限公司 | Iris biopsy method and apparatus |
CN105320947A (en) * | 2015-11-04 | 2016-02-10 | 博宏信息技术有限公司 | Face in-vivo detection method based on illumination component |
CN105550668A (en) * | 2016-01-25 | 2016-05-04 | 东莞市中控电子技术有限公司 | Apparatus for collecting biological features of living body and method for identifying biological features of living body |
CN105740775A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Three-dimensional face living body recognition method and device |
CN105740780A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Method and device for human face in-vivo detection |
CN105740778A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Improved three-dimensional human face in-vivo detection method and device thereof |
CN105844206A (en) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and identity authentication device |
CN106778518A (en) * | 2016-11-24 | 2017-05-31 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
US20180173980A1 (en) * | 2016-12-15 | 2018-06-21 | Beijing Kuangshi Technology Co., Ltd. | Method and device for face liveness detection |
CN108280418A (en) * | 2017-12-12 | 2018-07-13 | 北京深醒科技有限公司 | The deception recognition methods of face image and device |
CN108319901A (en) * | 2018-01-17 | 2018-07-24 | 百度在线网络技术(北京)有限公司 | Biopsy method, device, computer equipment and the readable medium of face |
CN108764091A (en) * | 2018-05-18 | 2018-11-06 | 北京市商汤科技开发有限公司 | Biopsy method and device, electronic equipment and storage medium |
US20180349682A1 (en) * | 2017-05-31 | 2018-12-06 | Facebook, Inc. | Face liveness detection |
CN109034029A (en) * | 2018-07-17 | 2018-12-18 | 新疆玖富万卡信息技术有限公司 | Detect face identification method, readable storage medium storing program for executing and the electronic equipment of living body |
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1426760A (en) * | 2001-12-18 | 2003-07-02 | 中国科学院自动化研究所 | Identity discriminating method based on living body iris |
CN105844206A (en) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and identity authentication device |
CN105184246A (en) * | 2015-08-28 | 2015-12-23 | 北京旷视科技有限公司 | Living body detection method and living body detection system |
CN105320939A (en) * | 2015-09-28 | 2016-02-10 | 北京天诚盛业科技有限公司 | Iris biopsy method and apparatus |
CN105320947A (en) * | 2015-11-04 | 2016-02-10 | 博宏信息技术有限公司 | Face in-vivo detection method based on illumination component |
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
CN105740778A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Improved three-dimensional human face in-vivo detection method and device thereof |
CN105740780A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Method and device for human face in-vivo detection |
CN105740775A (en) * | 2016-01-25 | 2016-07-06 | 北京天诚盛业科技有限公司 | Three-dimensional face living body recognition method and device |
CN105550668A (en) * | 2016-01-25 | 2016-05-04 | 东莞市中控电子技术有限公司 | Apparatus for collecting biological features of living body and method for identifying biological features of living body |
CN106778518A (en) * | 2016-11-24 | 2017-05-31 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
US20180173980A1 (en) * | 2016-12-15 | 2018-06-21 | Beijing Kuangshi Technology Co., Ltd. | Method and device for face liveness detection |
US20180349682A1 (en) * | 2017-05-31 | 2018-12-06 | Facebook, Inc. | Face liveness detection |
CN108280418A (en) * | 2017-12-12 | 2018-07-13 | 北京深醒科技有限公司 | The deception recognition methods of face image and device |
CN108319901A (en) * | 2018-01-17 | 2018-07-24 | 百度在线网络技术(北京)有限公司 | Biopsy method, device, computer equipment and the readable medium of face |
CN108764091A (en) * | 2018-05-18 | 2018-11-06 | 北京市商汤科技开发有限公司 | Biopsy method and device, electronic equipment and storage medium |
CN109034029A (en) * | 2018-07-17 | 2018-12-18 | 新疆玖富万卡信息技术有限公司 | Detect face identification method, readable storage medium storing program for executing and the electronic equipment of living body |
Non-Patent Citations (5)
Title |
---|
ANDREA LAGORIO et al.: "Liveness detection based on 3D face shape analysis", 2013 INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF) *
K MOHAN et al.: "A combined HOG-LPQ with Fuz-SVM classifier for Object face Liveness Detection", 2017 INTERNATIONAL CONFERENCE ON I-SMAC (IOT IN SOCIAL, MOBILE, ANALYTICS AND CLOUD) (I-SMAC) *
SENGUR, A. et al.: "Deep Feature Extraction for Face Liveness Detection", 2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING (IDAP) *
WU, Jipeng et al.: "Face liveness detection method based on FS-LBP features", Journal of Jimei University *
LI, Lanying: "Research on live face detection technology in identity authentication", China Master's Theses Full-text Database *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308722A (en) * | 2019-07-28 | 2021-02-02 | 四川谦泰仁投资管理有限公司 | Aquaculture insurance declaration request verification system based on infrared camera shooting |
CN111160233A (en) * | 2019-12-27 | 2020-05-15 | 中国科学院苏州纳米技术与纳米仿生研究所 | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance |
CN111160233B (en) * | 2019-12-27 | 2023-04-18 | 中国科学院苏州纳米技术与纳米仿生研究所 | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance |
CN113970922A (en) * | 2020-07-22 | 2022-01-25 | 商汤集团有限公司 | Point cloud data processing method and intelligent driving control method and device |
WO2022017131A1 (en) * | 2020-07-22 | 2022-01-27 | 商汤集团有限公司 | Point cloud data processing method and device, and intelligent driving control method and device |
CN113970922B (en) * | 2020-07-22 | 2024-06-14 | 商汤集团有限公司 | Point cloud data processing method, intelligent driving control method and device |
CN112102506A (en) * | 2020-09-25 | 2020-12-18 | 北京百度网讯科技有限公司 | Method, device and equipment for acquiring sampling point set of object and storage medium |
CN112102506B (en) * | 2020-09-25 | 2023-07-07 | 北京百度网讯科技有限公司 | Acquisition method, device, equipment and storage medium for sampling point set of object |
CN112102496A (en) * | 2020-09-27 | 2020-12-18 | 安徽省农业科学院畜牧兽医研究所 | Cattle physique measuring method, model training method and system |
CN112102496B (en) * | 2020-09-27 | 2024-03-26 | 安徽省农业科学院畜牧兽医研究所 | Cattle physique measurement method, model training method and system |
Also Published As
Publication number | Publication date |
---|---|
CN110059579B (en) | 2020-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059579B (en) | Method and apparatus for in vivo testing, electronic device, and storage medium | |
WO2019218621A1 (en) | Detection method for living being, device, electronic apparatus, and storage medium | |
CN112052831B (en) | Method, device and computer storage medium for face detection | |
CN111104833A (en) | Method and apparatus for in vivo examination, storage medium, and electronic device | |
CN112232155B (en) | Non-contact fingerprint identification method and device, terminal and storage medium | |
RU2431190C2 (en) | Facial prominence recognition method and device | |
CN111444744A (en) | Living body detection method, living body detection device, and storage medium | |
CA3152812A1 (en) | Facial recognition method and apparatus | |
CN112232163B (en) | Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment | |
CN112052830B (en) | Method, device and computer storage medium for face detection | |
CN112801057A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN112016525A (en) | Non-contact fingerprint acquisition method and device | |
CN111598065B (en) | Depth image acquisition method, living body identification method, apparatus, circuit, and medium | |
CN110532746B (en) | Face checking method, device, server and readable storage medium | |
KR20210069404A (en) | Liveness test method and liveness test apparatus | |
CN112232159B (en) | Fingerprint identification method, device, terminal and storage medium | |
CN110008943B (en) | Image processing method and device, computing equipment and storage medium | |
WO2022068931A1 (en) | Non-contact fingerprint recognition method and apparatus, terminal, and storage medium | |
CN108399401B (en) | Method and device for detecting face image | |
CN110991412A (en) | Face recognition method and device, storage medium and electronic equipment | |
CN112232157B (en) | Fingerprint area detection method, device, equipment and storage medium | |
KR20200083188A (en) | Method and apparatus for detecting liveness and object recognition method using same | |
CN114898447B (en) | Personalized fixation point detection method and device based on self-attention mechanism | |
CN116524609A (en) | Living body detection method and system | |
CN112232152B (en) | Non-contact fingerprint identification method and device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||